- **Name**: BICSdifftest
- **Version**: 1.0.1
- **Summary**: Differential Testing Framework for Verilog Hardware Verification
- **Upload time**: 2025-08-24 09:24:01
- **Requires Python**: >=3.8
- **License**: MIT
- **Keywords**: hardware verification, differential testing, verilog, cocotb, verilator, pytorch, golden model, rtl verification, hardware testing, eda
# BICSdifftest - Differential Testing Framework for Verilog Hardware Verification

A comprehensive differential testing framework that enables systematic verification of Verilog hardware designs against PyTorch golden models using cocotb and Verilator.

## Overview

BICSdifftest provides a complete solution for differential testing of hardware designs, comparing RTL implementations against software reference models at multiple checkpoint stages. The framework integrates PyTorch for golden model implementation, cocotb for testbench development, and Verilator for RTL simulation.

## Key Features

- **PyTorch Golden Models**: Implement reference models with checkpoint functionality for progressive debugging
- **Cocotb Integration**: Write testbenches in Python with full access to the differential testing framework
- **Verilator Backend**: High-performance RTL simulation with comprehensive build system
- **Multi-level Comparison**: Compare outputs at multiple pipeline stages and checkpoints
- **Comprehensive Reporting**: Generate detailed HTML, JSON, and JUnit XML reports
- **Parallel Execution**: Run multiple tests in parallel for faster verification
- **Advanced Logging**: Structured logging with performance monitoring and debug utilities
- **Configuration Management**: Flexible YAML/JSON configuration with environment overrides

## Architecture

```
BICSdifftest/
├── golden_model/          # PyTorch golden models
│   ├── base/             # Base classes and utilities
│   └── models/           # Specific golden model implementations
├── testbench/            # Cocotb testbench infrastructure
│   ├── base/             # Base testbench classes
│   ├── tests/            # Test implementations
│   └── utils/            # Testing utilities
├── sim/                  # Simulation backend
│   └── verilator/        # Verilator integration
├── config/               # Configuration management
├── scripts/              # Automation and utilities
├── examples/             # Example designs and tests
│   └── simple_alu/       # Simple ALU example
└── docs/                 # Documentation
```

## Quick Start

### Installation

#### Option 1: Install from PyPI (Recommended)
```bash
# Install BICSdifftest
pip install BICSdifftest

# Or install with all optional dependencies
pip install BICSdifftest[all]
```

#### Option 2: Install from Source
```bash
# Clone the repository
git clone https://github.com/BICS/BICSdifftest.git
cd BICSdifftest

# Install in development mode
pip install -e .[all]
```

### Prerequisites

- **Python 3.8+**
- **Verilator 4.0+** (RTL simulator)
- **PyTorch** (automatically installed)
- **cocotb** (automatically installed)

### Create Your First Project

```bash
# Create a new workspace
bicsdifftest build my_hardware_project

# Navigate to the workspace
cd my_hardware_project

# Run the included counter example
make test-counter
```

### What You Get

The workspace includes:
- **Complete example designs** (Counter, ALU, FPU)
- **PyTorch golden models** with checkpoint functionality
- **cocotb testbenches** for hardware simulation
- **Automated build system** with Makefile
- **Configuration templates** for easy customization

## Usage

### Creating a Golden Model

```python
from golden_model.base import PipelinedGoldenModel
import torch

class MyDesignGoldenModel(PipelinedGoldenModel):
    def __init__(self, data_width=32):
        super().__init__("MyDesignGoldenModel")
        self.data_width = data_width
        
        # Add pipeline stages (Stage1/Stage2/Stage3 are your own stage classes)
        self.add_stage(Stage1(), "preprocessing")
        self.add_stage(Stage2(), "computation")
        self.add_stage(Stage3(), "postprocessing")
    
    def _forward_impl(self, inputs):
        # Implement your design logic here
        result = self.process_inputs(inputs)
        
        # Add checkpoints for debugging
        self.add_checkpoint('intermediate_result', result)
        
        return result
```
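The framework's `PipelinedGoldenModel` handles stage sequencing and checkpoint storage internally. As a framework-free illustration of the idea, here is a minimal standalone sketch (all names below are hypothetical, not the BICSdifftest API):

```python
from typing import Any, Callable, Dict, List, Tuple

class MiniPipelineModel:
    """Standalone sketch of a pipelined golden model with checkpoints.
    Illustration only -- not the BICSdifftest base class."""

    def __init__(self) -> None:
        self.stages: List[Tuple[str, Callable[[Any], Any]]] = []
        self.checkpoints: Dict[str, Any] = {}

    def add_stage(self, name: str, fn: Callable[[Any], Any]) -> None:
        self.stages.append((name, fn))

    def forward(self, x: Any) -> Any:
        # Run stages in order, recording each intermediate result by name
        self.checkpoints.clear()
        for name, fn in self.stages:
            x = fn(x)
            self.checkpoints[name] = x
        return x

model = MiniPipelineModel()
model.add_stage("preprocessing", lambda v: v & 0xFFFFFFFF)  # mask to 32 bits
model.add_stage("computation", lambda v: v + 1)
model.add_stage("postprocessing", lambda v: v ^ 0xFF)

result = model.forward(0x1_0000_0005)  # upper bits masked away first
```

Each checkpoint can later be compared against a debug tap in the RTL, which is what makes progressive, stage-by-stage debugging possible.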

### Writing a Testbench

```python
import cocotb
from cocotb.triggers import RisingEdge
from testbench.base import DiffTestBase, TestSequence, TestVector

class MyDesignDiffTest(DiffTestBase):
    def __init__(self, dut):
        golden_model = MyDesignGoldenModel()
        super().__init__(dut, golden_model, "MyDesignDiffTest")
    
    async def setup_dut(self):
        # Initialize your DUT
        pass
    
    async def apply_custom_stimulus(self, test_vector):
        # Apply test inputs to DUT
        self.dut.input_data.value = test_vector.inputs['data']
        await RisingEdge(self.dut.clk)
    
    async def capture_dut_outputs(self):
        # Capture DUT outputs
        return {
            'result': int(self.dut.result.value),
            'valid': bool(int(self.dut.valid.value))
        }

@cocotb.test()
async def test_my_design(dut):
    diff_test = MyDesignDiffTest(dut)
    
    # Create test sequence
    test_vectors = [
        TestVector(inputs={'data': 0x12345678}, test_id="test_0"),
        TestVector(inputs={'data': 0xDEADBEEF}, test_id="test_1"),
    ]
    sequence = TestSequence("basic_tests", test_vectors)
    
    # Run differential test
    results = await diff_test.run_test([sequence])
    
    # Verify results
    assert all(r.passed for r in results)
```

### Configuration

Create a test configuration file:

```yaml
# my_design_test.yaml
name: "my_design"
description: "Test configuration for my design"

top_module: "my_design"
verilog_sources:
  - "rtl/my_design.sv"

golden_model_class: "golden_model.models.MyDesignGoldenModel"

verilator:
  enable_waves: true
  trace_depth: 99
  compile_args: ["--trace", "-Wall"]

cocotb:
  log_level: "INFO"
  test_timeout: 300

comparison:
  mode: "bit_exact"
```
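The key features above mention environment overrides on top of YAML/JSON configuration; the exact override syntax is not documented here. As a standalone sketch of one plausible scheme (the `DIFFTEST_` prefix and `__` nesting convention below are illustrative assumptions, not necessarily BICSdifftest's):

```python
import os
from typing import Any, Dict

def apply_env_overrides(config: Dict[str, Any], prefix: str = "DIFFTEST_") -> Dict[str, Any]:
    """Overlay environment variables onto a nested config dict.
    e.g. DIFFTEST_VERILATOR__ENABLE_WAVES=false -> config['verilator']['enable_waves'] = False
    (Hypothetical scheme for illustration.)"""
    for key, raw in os.environ.items():
        if not key.startswith(prefix):
            continue
        path = key[len(prefix):].lower().split("__")
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        # Minimal type coercion for booleans and integers
        value: Any = raw
        if raw.lower() in ("true", "false"):
            value = raw.lower() == "true"
        elif raw.isdigit():
            value = int(raw)
        node[path[-1]] = value
    return config

cfg = {"verilator": {"enable_waves": True}, "cocotb": {"test_timeout": 300}}
os.environ["DIFFTEST_VERILATOR__ENABLE_WAVES"] = "false"
os.environ["DIFFTEST_COCOTB__TEST_TIMEOUT"] = "600"
cfg = apply_env_overrides(cfg)
```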

### Running Tests

```bash
# Run specific tests
python scripts/test_runner.py --test-dirs examples/my_design

# Run in parallel
python scripts/test_runner.py --parallel 4

# Run with filtering
python scripts/test_runner.py --test-filter my_design --log-level DEBUG

# Generate only reports (after tests have run)
make reports
```

## Examples

### Simple ALU

The framework includes a complete Simple ALU example demonstrating:

- **RTL Design** (`examples/simple_alu/rtl/simple_alu.sv`): 32-bit ALU with multiple operations
- **Golden Model** (`golden_model/models/simple_alu_model.py`): PyTorch implementation with checkpoints
- **Testbench** (`examples/simple_alu/testbench/test_simple_alu.py`): Comprehensive cocotb testbench
- **Configuration** (`examples/simple_alu/config/simple_alu_test.yaml`): Complete test configuration

Features demonstrated:
- Multi-cycle operations (multiplication, division)
- Pipeline stage verification
- Corner case testing
- Random test generation
- Comprehensive error reporting

Run the example:
```bash
make test-alu
```
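The ALU example demonstrates both corner-case testing and random test generation. A framework-free sketch of how such stimulus might be produced (the helper below is hypothetical, not the framework's `TestVector` generator):

```python
import random
from typing import Dict, List

def make_alu_vectors(n_random: int, seed: int = 0) -> List[Dict[str, int]]:
    """Corner cases first, then seeded random 32-bit operand pairs.
    Standalone illustration only."""
    # Classic 32-bit boundary values: zero, one, max signed, min signed, all-ones
    corners = [0x0000_0000, 0x0000_0001, 0x7FFF_FFFF, 0x8000_0000, 0xFFFF_FFFF]
    vectors = [{"a": a, "b": b} for a in corners for b in corners]
    rng = random.Random(seed)  # seeded so a failing vector is reproducible
    for _ in range(n_random):
        vectors.append({"a": rng.getrandbits(32), "b": rng.getrandbits(32)})
    return vectors

vectors = make_alu_vectors(100)
```

Seeding the generator is what makes a random failure reproducible when re-running a single test.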

## Advanced Features

### Checkpoint-based Verification

The framework supports multi-level verification by comparing intermediate results:

```python
# Golden model creates checkpoints
def forward(self, inputs):
    stage1_result = self.stage1(inputs)
    self.add_checkpoint('stage1', stage1_result)
    
    stage2_result = self.stage2(stage1_result)
    self.add_checkpoint('stage2', stage2_result)
    
    return stage2_result

# Testbench compares at each checkpoint
async def compare_checkpoints(self, golden_checkpoints, hardware_outputs):
    comparisons = []
    
    # Compare stage 1
    stage1_comparison = self.comparator.compare(
        golden_checkpoints['stage1'],
        hardware_outputs['debug_stage1'],
        name="stage1_verification"
    )
    comparisons.append(stage1_comparison)
    
    return comparisons
```

### Parallel Test Execution

Run tests in parallel for faster verification:

```bash
# Use 4 parallel workers
make test-parallel PARALLEL_JOBS=4

# Or with the test runner directly
python scripts/test_runner.py --parallel 8 --continue-on-error
```

### Advanced Comparison Modes

Support for different comparison strategies:

```yaml
comparison:
  mode: "absolute_tolerance"  # bit_exact, absolute_tolerance, relative_tolerance, ulp_tolerance
  absolute_tolerance: 1e-6
  
  # Signal-specific overrides
  signal_overrides:
    critical_output:
      mode: "bit_exact"
    approximate_result:
      mode: "relative_tolerance"
      relative_tolerance: 0.01
```
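The four modes trade strictness for robustness: `bit_exact` for integer datapaths, tolerances for floating-point units where rounding differs between hardware and PyTorch. A standalone sketch of what each mode checks (not the framework's comparator; the ULP mapping below is the standard reinterpret-as-ordered-integer trick):

```python
import math
import struct

def float_to_ordered_int(x: float) -> int:
    """Map an IEEE-754 double to a monotonically ordered integer,
    so ULP distance is just integer subtraction."""
    bits = struct.unpack("<q", struct.pack("<d", x))[0]
    return bits if bits >= 0 else -(bits & 0x7FFF_FFFF_FFFF_FFFF)

def compare(expected: float, actual: float, mode: str = "bit_exact",
            absolute_tolerance: float = 1e-6,
            relative_tolerance: float = 0.01,
            max_ulps: int = 4) -> bool:
    """Standalone illustration of the four comparison modes."""
    if mode == "bit_exact":
        return expected == actual
    if mode == "absolute_tolerance":
        return abs(expected - actual) <= absolute_tolerance
    if mode == "relative_tolerance":
        return math.isclose(expected, actual, rel_tol=relative_tolerance, abs_tol=0.0)
    if mode == "ulp_tolerance":
        return abs(float_to_ordered_int(expected) - float_to_ordered_int(actual)) <= max_ulps
    raise ValueError(f"unknown comparison mode: {mode}")
```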

### Performance Monitoring

Built-in performance monitoring and reporting:

```python
with logger.timer("simulation_phase"):
    result = await run_simulation()
    
logger.log_comparison_result(
    signal_name="output",
    expected=golden_value,
    actual=hw_value,
    passed=comparison_passed
)
```
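`logger.timer` is the framework's own interface; the underlying pattern is an ordinary scope timer, which can be sketched framework-free with a context manager:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timer(name: str):
    """Minimal stand-in for a logger.timer(...)-style scope timer.
    Records elapsed wall-clock seconds even if the body raises."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with timer("simulation_phase"):
    total = sum(range(100_000))  # stand-in for the simulated workload
```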

## Report Generation

The framework generates comprehensive reports:

- **HTML Reports**: Interactive reports with test summaries, detailed results, and error analysis
- **JSON Reports**: Machine-readable test data for integration with other tools
- **JUnit XML**: Compatible with CI/CD systems like Jenkins, GitHub Actions

Access reports in the `reports/` directory after running tests.
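The JUnit XML format is worth knowing if you integrate with CI: each test becomes a `<testcase>` inside a `<testsuite>`, with `<failure>` children for mismatches. A standalone sketch of emitting it from (id, passed, message) tuples (not the framework's reporter):

```python
import xml.etree.ElementTree as ET

def results_to_junit(suite_name, results):
    """Render (test_id, passed, message) tuples as JUnit-style XML.
    Illustration of the report shape only."""
    failures = sum(1 for _, passed, _ in results if not passed)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for test_id, passed, message in results:
        case = ET.SubElement(suite, "testcase", name=test_id)
        if not passed:
            ET.SubElement(case, "failure", message=message)
    return ET.tostring(suite, encoding="unicode")

xml_report = results_to_junit("basic_tests", [
    ("test_0", True, ""),
    ("test_1", False, "mismatch on signal 'result'"),
])
```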

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite: `make test`
6. Submit a pull request

### Development Setup

```bash
make dev-setup
```

This installs additional development tools including linting, formatting, and pre-commit hooks.

## Troubleshooting

### Common Issues

**Verilator not found**: Install Verilator and ensure it's in your PATH
```bash
# Ubuntu/Debian
sudo apt-get install verilator

# macOS with Homebrew
brew install verilator
```

**PyTorch installation issues**: Use conda for complex dependencies
```bash
conda install pytorch torchvision -c pytorch
```

**Cocotb import errors**: Ensure cocotb is properly installed
```bash
pip install cocotb cocotb-test
```

### Debug Mode

Run tests with maximum verbosity for troubleshooting:

```bash
make debug
```

This preserves all build artifacts and generates detailed debug logs.

### Getting Help

- Check the `logs/` directory for detailed execution logs
- Use `make status` to check framework installation
- Enable debug logging with `--log-level DEBUG`

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Built on top of excellent open-source tools: Verilator, Cocotb, and PyTorch
- Inspired by industry best practices in hardware verification
- Designed for the BICS research community and hardware verification engineers

---

**BICSdifftest** - Bringing software-style testing methodologies to hardware verification.

            
