# cce

- **Name**: cce
- **Version**: 0.2.3
- **Summary**: Confidence-Consistency Evaluation for Time Series Anomaly Detection
- **Home page**: https://github.com/EmorZz1G/CCE
- **Author / Maintainer**: EmorZz1G
- **Requires Python**: >=3.8
- **License**: not specified in the package metadata (the README below states MIT)
- **Keywords**: time-series, anomaly-detection, evaluation, metrics, machine-learning, confidence-consistency
- **Requirements**: torch>=1.8.0, numpy>=1.19.0, scipy>=1.7.0, scikit-learn>=1.0.0, pandas>=1.3.0, matplotlib>=3.3.0, seaborn>=0.11.0, tqdm>=4.60.0, pyyaml>=5.4.0, deprecated>=1.2.0
- **Upload time**: 2025-09-17 09:22:50
# CCE & RankEval: Confidence-Consistency Evaluation for Time Series Anomaly Detection

<p align="center">
  <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/Python-3.8%2B-blue.svg" alt="Python"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License"></a>
  <a href="https://pypi.org/project/cce/"><img src="https://img.shields.io/badge/PyPI-CCE-red.svg" alt="PyPI"></a>
  <a href="http://arxiv.org/abs/2509.01098"><img src="https://img.shields.io/badge/arXiv-2509.01098-b31b1b.svg" alt="arXiv"></a>
</p>

A comprehensive evaluation framework for time series anomaly detection metrics, focusing on confidence-consistency evaluation, robustness assessment, and discriminative power analysis. This implementation provides novel evaluation metrics and benchmarking tools to improve the reliability and comparability of anomaly detection models.

📄 **Paper**: [arXiv:2509.01098](http://arxiv.org/abs/2509.01098)  
🌐 **Website**: [CCE & RankEval](https://EmorZz1G.github.io/CCE/)

## 🚀 Features

- **Multi-metric Evaluation**: Support for various anomaly detection metrics (F1, AUC-ROC, VUS-PR, etc.)
- **Performance Benchmarking**: Latency analysis and theoretical ranking validation
- **Robustness Assessment**: Noise-resistant evaluation with variance consideration
- **Discriminative Power Analysis**: Both ranking-based and value-change-ratio-based approaches
- **Automated Testing**: Streamlined evaluation pipeline for new metrics
- **Real-world Dataset Support**: Comprehensive testing on multiple datasets
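
For reference, the point-wise baselines among these metrics (F1 and AUC-ROC) can be computed directly with scikit-learn; the toy `labels`/`scores` arrays below are illustrative, and range-aware metrics such as VUS-PR require the implementations in this package instead:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

labels = np.array([0, 0, 1, 1, 0, 0, 1, 0])  # binary ground truth
scores = np.array([0.1, 0.2, 0.9, 0.8, 0.3, 0.1, 0.7, 0.2])  # anomaly scores

# AUC-ROC works directly on continuous scores
auc = roc_auc_score(labels, scores)

# F1 requires binarizing the scores at a threshold first
preds = (scores >= 0.5).astype(int)
f1 = f1_score(labels, preds)

print(f"AUC-ROC: {auc:.3f}, F1: {f1:.3f}")
```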

## 📦 Installation

### Option 1: Install from PyPI (Recommended)

```bash
pip install cce
```

### Option 2: Install from Source

```bash
# Clone the repository
git clone https://github.com/EmorZz1G/CCE.git
cd CCE

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .
```
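
Either way, a quick sanity check that the package imports; the `__version__` attribute is an assumption, hence the `getattr` fallback:

```bash
python -c "import cce; print(getattr(cce, '__version__', 'import OK'))"
```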

**Note**: Build-related files are located in the `docs` directory. For detailed build instructions, please refer to `docs/*.md`.

## 🔧 Requirements

- Python 3.8+
- PyTorch
- NumPy
- Other dependencies (see `requirements.txt`)

## ⚙️ Configuration

After installation, you may need to configure the datasets path:

```bash
# Create a configuration file
cce config create

# Set your datasets directory
cce config set-datasets-path /path/to/your/datasets

# View current configuration
cce config show
```

For detailed configuration options, see [Configuration Guide](docs/CONFIGURATION_GUIDE.md).

## 📚 Quick Start

### Confidence-Consistency Evaluation (CCE)

```python
import numpy as np
from cce import metrics

labels = np.array([0, 0, 1, 1, 0, 0, 1, 0])  # binary ground-truth labels
scores = np.array([0.1, 0.2, 0.9, 0.8, 0.3, 0.1, 0.7, 0.2])  # anomaly scores

metricor = metrics.basic_metricor()
CCE_score = metricor.metric_CCE(labels, scores)
```
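
To see the metric separate detectors, a small sketch assuming `metric_CCE` accepts 1-D arrays of binary labels and continuous scores (with higher values presumably indicating a better detector):

```python
import numpy as np
from cce import metrics

rng = np.random.default_rng(0)
labels = np.zeros(200, dtype=int)
labels[80:100] = 1  # a single anomalous segment

sharp_scores = labels + rng.normal(0.0, 0.05, size=200)  # near-perfect detector
random_scores = rng.uniform(0.0, 1.0, size=200)          # uninformative detector

metricor = metrics.basic_metricor()
print("sharp :", metricor.metric_CCE(labels, sharp_scores))
print("random:", metricor.metric_CCE(labels, random_scores))
```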

## RankEval

### Basic Usage

```bash
# Run baseline evaluation (`.` sources the script into the current shell)
. scripts/run_baseline.sh

# Run real-world dataset evaluation
. scripts/run_real_world.sh
```

### Adding New Metrics

1. **Implement the metric function** in `src/metrics/basic_metrics.py` (a fleshed-out sketch follows this list); it is invoked as a `basic_metricor` method, so it takes `self`:
   ```python
   def metric_NewMetric(self, labels, scores, **kwargs):
       # Your metric implementation
       return metric_value
   ```

2. **Add evaluation logic** in `src/evaluation/eval_metrics/eval_latency_baselines.py`:
   ```python
   # Inside the existing baseline-dispatch if/elif chain:
   elif baseline == 'NewMetric':
       with timer(case_name, model_name, case_seed_new, score_seed_new, model, metric_name='NewMetric') as data_item:
           result = metricor.metric_NewMetric(labels, scores)
           data_item['val'] = result
   ```

3. **Run the evaluation**:
   ```bash
   python src/evaluation/eval_metrics/eval_latency_baselines.py --baseline NewMetric
   ```

4. **View results** in `logs/NewMetric/`
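
As a concrete illustration of step 1, here is a minimal sketch of a new metric method, using point-wise AUC-PR from scikit-learn as a stand-in computation; the body is an assumption for illustration, not part of this package:

```python
from sklearn.metrics import average_precision_score

def metric_NewMetric(self, labels, scores, **kwargs):
    """Hypothetical example metric: point-wise AUC-PR.

    labels : 1-D array of {0, 1} ground-truth anomaly labels
    scores : 1-D array of continuous anomaly scores
    """
    # Any scalar that summarizes detector quality can be returned;
    # average_precision_score is only an illustrative stand-in.
    return average_precision_score(labels, scores)
```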

## 🏗️ Project Structure

```
CCE/
├── src/                    # Source code
│   ├── metrics/           # Metric implementations
│   ├── evaluation/        # Evaluation framework
│   ├── models/            # Model implementations
│   ├── data_utils/        # Data processing utilities
│   ├── utils/             # Helper functions
│   └── scripts/           # Execution scripts
├──                   # Build and installation files
│   ├── setup.py           # Package setup configuration
│   ├── pyproject.toml     # Modern Python package config
│   ├── MANIFEST.in        # Package file inclusion
│   ├── BUILD.md           # Detailed build instructions
│   └── INSTALL.md         # Quick install guide
├── datasets/              # Dataset storage
├── logs/                  # Evaluation results
├── tests/                 # Test files
├── docs/                  # Documentation
├── requirements.txt       # Dependencies
├── setup.py               # Simple setup entry point
└── pyproject.toml         # Basic build configuration
```

## 📊 Supported Evaluations

- **Latency Analysis**: Metric computation time measurement
- **Theoretical Ranking**: Validation against theoretical expectations
- **Robustness Assessment**: Noise resistance evaluation
- **Discriminative Power**: Ranking-based and value-change-ratio analysis
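
The latency analysis relies on the `timer(...)` context manager shown in the "Adding New Metrics" snippet, whose implementation is not included in this README. A minimal sketch of how such a timer might work, with a simplified signature and assumed field names:

```python
import time
from contextlib import contextmanager

@contextmanager
def timer(case_name, model_name, metric_name):
    """Simplified sketch of a latency-measuring context manager."""
    data_item = {"case": case_name, "model": model_name, "metric": metric_name}
    start = time.perf_counter()
    try:
        yield data_item  # the caller fills in data_item['val']
    finally:
        data_item["latency_s"] = time.perf_counter() - start
        print(data_item)  # the real framework presumably persists results to logs/
```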

## 🔄 Updates
- **2025-08-26**: Core evaluation framework implementation
- **2025-08-26**: Multi-metric support and benchmarking

## 📋 TODO List

- [ ] Automated standard evaluation pipeline
- [ ] Enhanced robustness assessment
- [ ] Advanced discriminative power analysis
- [ ] CI/CD integration for metric testing

## 🤝 Contributing

We welcome contributions! Please feel free to submit issues and pull requests.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- **FTSAD**: For providing the time series anomaly detection evaluation framework
- **SimAD**: For dataset loading utilities
- **TSB-AD**: For model implementation code
- **Community**: For feedback and contributions

## 📞 Contact

For questions and support, please open an issue on GitHub or contact the maintainers.

## 📖 Citation

If you find our work useful, please cite our paper and consider giving us a star ⭐.

```bibtex
@article{zhong2025cce,
  title={CCE: Confidence-Consistency Evaluation for Time Series Anomaly Detection},
  author={Zhong, Zhijie and Yu, Zhiwen and Cheung, Yiu-ming and Yang, Kaixiang},
  journal={arXiv preprint arXiv:2509.01098},
  year={2025}
}
```

---

**CCE** - Making time series anomaly detection evaluation more reliable and comprehensive.

            
