# CCE: Confidence-Consistency Evaluation for Time Series Anomaly Detection
[Python 3.8+](https://www.python.org/downloads/)
[License: MIT](LICENSE)
[PyPI](https://pypi.org/project/cce/)
A comprehensive evaluation framework for time series anomaly detection metrics, focusing on confidence-consistency evaluation, robustness assessment, and discriminative power analysis.
## 🚀 Features
- **Multi-metric Evaluation**: Support for various anomaly detection metrics (F1, AUC-ROC, VUS-PR, etc.)
- **Performance Benchmarking**: Latency analysis and theoretical ranking validation
- **Robustness Assessment**: Noise-resistant evaluation that takes score variance into account (see the sketch after this list)
- **Discriminative Power Analysis**: Both ranking-based and value-change-ratio-based approaches
- **Automated Testing**: Streamlined evaluation pipeline for new metrics
- **Real-world Dataset Support**: Comprehensive testing on multiple datasets
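The robustness assessment can be pictured as re-evaluating a metric while the anomaly scores are perturbed and tracking how much its value moves. Below is a minimal sketch of that idea using only NumPy; the helper name `robustness_probe`, the Gaussian noise model, and all parameters are illustrative assumptions, not the framework's actual implementation.

```python
import numpy as np

def robustness_probe(metric_fn, labels, scores, noise_std=0.05, n_trials=20, seed=0):
    """Hypothetical probe: mean and std of a metric under Gaussian score noise."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    vals = [
        metric_fn(labels, scores + rng.normal(0.0, noise_std, size=scores.shape))
        for _ in range(n_trials)
    ]
    # A robust metric should keep the std small relative to the mean.
    return float(np.mean(vals)), float(np.std(vals))
```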
## 📦 Installation
### Option 1: Install from PyPI (Recommended)
```bash
pip install cce
```
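To confirm the install, you can print the installed version from Python (using `importlib.metadata`, which is in the standard library on Python 3.8+):

```python
# Quick install check: prints the installed version of the cce
# distribution, e.g. 0.1.0
from importlib.metadata import version
print(version("cce"))
```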
### Option 2: Install from Source
```bash
# Clone the repository
git clone https://github.com/EmorZz1G/CCE.git
cd CCE
# Install dependencies
pip install -r requirements.txt
# Install in development mode
pip install -e .
```
**Note**: Build-related files are located in the `` directory. For detailed build instructions, see `BUILD.md`.
## 🔧 Requirements
- Python 3.8+
- PyTorch
- NumPy
- Other dependencies (see `requirements.txt`)
## ⚙️ Configuration
After installation, you may need to configure the datasets path:
```bash
# Create a configuration file
cce config create
# Set your datasets directory
cce config set-datasets-path /path/to/your/datasets
# View current configuration
cce config show
```
For detailed configuration options, see [Configuration Guide](docs/CONFIGURATION_GUIDE.md).
## 📚 Quick Start
### Basic Usage
```bash
# Run baseline evaluation
. scripts/run_baseline.sh
# Run real-world dataset evaluation
. scripts/run_real_world.sh
```
### Adding New Metrics
1. **Implement the metric function** in `src/metrics/basic_metrics.py` (a concrete sketch follows this list):
   ```python
   def metric_NewMetric(labels, scores, **kwargs):
       # Your metric implementation
       return metric_value
   ```
2. **Add evaluation logic** in `src/evaluation/eval_metrics/eval_latency_baselines.py`:
   ```python
   elif baseline == 'NewMetric':
       with timer(case_name, model_name, case_seed_new, score_seed_new, model, metric_name='NewMetric') as data_item:
           result = metricor.metric_NewMetric(labels, scores)
           data_item['val'] = result
   ```
3. **Run the evaluation**:
   ```bash
   python src/evaluation/eval_metrics/eval_latency_baselines.py --baseline NewMetric
   ```
4. **View results** in `logs/NewMetric/`
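To make step 1 concrete, here is a minimal sketch of a metric function following the signature convention above: a plain point-wise F1 at a fixed score threshold. The name `metric_PointwiseF1` and the `threshold` keyword are illustrative assumptions, not part of the framework.

```python
import numpy as np

def metric_PointwiseF1(labels, scores, threshold=0.5, **kwargs):
    """Point-wise F1: binarize scores at `threshold` and compare to labels."""
    labels = np.asarray(labels).astype(bool)
    preds = np.asarray(scores, dtype=float) >= threshold
    tp = int(np.sum(preds & labels))    # true positives
    fp = int(np.sum(preds & ~labels))   # false positives
    fn = int(np.sum(~preds & labels))   # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```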
## 🏗️ Project Structure
```
CCE/
├── src/                  # Source code
│   ├── metrics/          # Metric implementations
│   ├── evaluation/       # Evaluation framework
│   ├── models/           # Model implementations
│   ├── data_utils/       # Data processing utilities
│   ├── utils/            # Helper functions
│   └── scripts/          # Execution scripts
├──                       # Build and installation files
│   ├── setup.py          # Package setup configuration
│   ├── pyproject.toml    # Modern Python package config
│   ├── MANIFEST.in       # Package file inclusion
│   ├── BUILD.md          # Detailed build instructions
│   └── INSTALL.md        # Quick install guide
├── datasets/             # Dataset storage
├── logs/                 # Evaluation results
├── tests/                # Test files
├── docs/                 # Documentation
├── requirements.txt      # Dependencies
├── setup.py              # Simple setup entry point
└── pyproject.toml        # Basic build configuration
```
## 📊 Supported Evaluations
- **Latency Analysis**: Metric computation time measurement (see the timer sketch after this list)
- **Theoretical Ranking**: Validation against theoretical expectations
- **Robustness Assessment**: Noise resistance evaluation
- **Discriminative Power**: Ranking-based and value-change-ratio analysis
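The latency analysis hinges on timing each metric call; in step 2 of "Adding New Metrics" above, calls are wrapped in a `timer` context manager that also collects the metric value. Below is a minimal sketch of how such a context manager could work; the real `timer` in `eval_latency_baselines.py` takes more arguments, and this simplified signature is an assumption.

```python
import time
from contextlib import contextmanager

@contextmanager
def timer(metric_name):
    # Hypothetical simplified timer: yields a dict the caller fills in
    # (e.g. data_item['val'] = result) and records elapsed wall-clock time.
    data_item = {"metric": metric_name}
    start = time.perf_counter()
    try:
        yield data_item
    finally:
        data_item["latency_s"] = time.perf_counter() - start
```

Used as `with timer('NewMetric') as data_item: data_item['val'] = ...`, mirroring the pattern in step 2.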
## 🔄 Updates
- **2025-01-XX**: Project initialization and website setup
- **2025-01-XX**: Core evaluation framework implementation
- **2025-01-XX**: Multi-metric support and benchmarking
## 📋 TODO List
- [ ] Automated standard evaluation pipeline
- [ ] Enhanced robustness assessment
- [ ] Advanced discriminative power analysis
- [ ] CI/CD integration for metric testing
## 🤝 Contributing
We welcome contributions! Please feel free to submit issues and pull requests.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- **FTSAD**: For providing the time series anomaly detection evaluation framework
- **TSB-AD**: For model implementation code
- **Community**: For feedback and contributions
## 📞 Contact
For questions and support, please open an issue on GitHub or contact the maintainers.
---
**CCE** - Making time series anomaly detection evaluation more reliable and comprehensive.