# Fractal-Attention Analysis (FAA) Framework
[Python 3.8+](https://www.python.org/downloads/)
[MIT License](https://opensource.org/licenses/MIT)
[PyPI](https://badge.fury.io/py/fractal-attention-analysis)
A mathematical framework for analyzing transformer attention mechanisms using fractal geometry and golden ratio transformations. FAA provides deep insights into how Large Language Models (LLMs) process and attend to information.
## 🌟 Features
- **Universal LLM Support**: Works with any HuggingFace transformer model
- **Fractal Dimension Analysis**: Compute fractal dimensions of attention patterns
- **Golden Ratio Transformations**: Apply φ-based transformations for enhanced interpretability
- **Comprehensive Metrics**: Entropy, sparsity, concentration, and custom interpretability scores
- **Rich Visualizations**: Beautiful matplotlib-based attention pattern visualizations
- **CLI Interface**: Easy-to-use command-line tools
- **Modular Design**: Clean OOP architecture for easy extension
- **GPU Acceleration**: Efficient CUDA support with automatic memory management
## 📊 Key Findings
Our research demonstrates:
- **Universal Fractal Signature**: Consistent fractal dimension (≈2.0295) across diverse architectures (GPT-2, Qwen, Llama, Gemma)
- **Architectural Independence**: Fractal patterns persist despite model size and design differences
- **Real-time Analysis**: Sub-second performance for practical deployment
## 🚀 Quick Start
### Installation
```bash
pip install fractal-attention-analysis
```
### Basic Usage
```python
from fractal_attention_analysis import FractalAttentionAnalyzer
# Initialize analyzer with any HuggingFace model
analyzer = FractalAttentionAnalyzer("gpt2")
# Analyze text
results = analyzer.analyze("The golden ratio appears in nature and mathematics.")
# Access results
print(f"Fractal Dimension: {results['fractal_dimension']:.4f}")
print(f"Metrics: {results['metrics']}")
```
### Command Line Interface
```bash
# Analyze text with GPT-2
faa-analyze --model gpt2 --text "Hello world"
# Analyze with visualization
faa-analyze --model meta-llama/Llama-3.2-1B \
--text "AI is transforming the world" \
--save-viz ./output
# Compare two models
faa-compare --model1 gpt2 --model2 distilgpt2 \
--text "Test sentence"
```
## 📚 Documentation
### Core Components
#### FractalAttentionAnalyzer
Main class for performing fractal-attention analysis:
```python
analyzer = FractalAttentionAnalyzer(
model_name="gpt2", # HuggingFace model ID
device_manager=None, # Optional custom device manager
force_eager_attention=True, # Force eager attention for compatibility
)
# Analyze text
results = analyzer.analyze(
text="Your input text",
layer_idx=-1, # Layer to analyze (-1 = last)
head_idx=0, # Attention head index
return_visualizations=True, # Generate plots
save_dir=Path("./output") # Save visualizations
)
```
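Because `layer_idx` and `head_idx` are plain arguments, sweeping layers is a short loop. The sketch below is illustrative: it assumes negative indices address layers from the end (as `-1` does above) and that the result dict exposes `fractal_dimension` as in the Quick Start example.

```python
from fractal_attention_analysis import FractalAttentionAnalyzer

# Sketch: sweep the last four layers and collect their fractal dimensions.
# Assumes the result dict exposes 'fractal_dimension' as in the Quick Start example.
analyzer = FractalAttentionAnalyzer("gpt2")

dimensions = {}
for layer_idx in range(-4, 0):  # -4, -3, -2, -1 (last four layers)
    results = analyzer.analyze(
        text="The golden ratio appears in nature and mathematics.",
        layer_idx=layer_idx,
        head_idx=0,
        return_visualizations=False,
    )
    dimensions[layer_idx] = results["fractal_dimension"]

for layer_idx, dim in sorted(dimensions.items()):
    print(f"layer {layer_idx}: D = {dim:.4f}")
```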
#### FractalTransforms
Fractal transformation and dimension calculations:
```python
from fractal_attention_analysis import FractalTransforms
transforms = FractalTransforms()
# Compute fractal dimension
dimension = transforms.compute_fractal_dimension(attention_matrix)
# Apply fractal interpolation
transformed = transforms.fractal_interpolation_function(attention_matrix)
# Golden ratio scoring
scored = transforms.golden_ratio_scoring(attention_matrix)
```
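Here `attention_matrix` is any 2D array of attention weights. One way to obtain one is via the standard HuggingFace `output_attentions=True` path; this is a sketch of that route, not an FAA-specific loader, and it assumes `compute_fractal_dimension` accepts a plain NumPy array.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from fractal_attention_analysis import FractalTransforms

# Sketch: pull a single-head attention matrix out of GPT-2 with the standard
# HuggingFace API, then feed it to FractalTransforms.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

inputs = tokenizer("The golden ratio appears in nature.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
attention_matrix = outputs.attentions[-1][0, 0].numpy()  # last layer, head 0

transforms = FractalTransforms()
dimension = transforms.compute_fractal_dimension(attention_matrix)
print(f"Fractal dimension: {dimension:.4f}")
```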
#### AttentionMetrics
Comprehensive attention metrics:
```python
from fractal_attention_analysis import AttentionMetrics
metrics = AttentionMetrics()
# Compute all metrics
all_metrics = metrics.compute_all_metrics(
attention_matrix,
fractal_dimension=2.0295
)
# Individual metrics
entropy = metrics.compute_entropy(attention_matrix)
sparsity = metrics.compute_sparsity(attention_matrix)
concentration = metrics.compute_concentration(attention_matrix)
```
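As a sanity check on what these metrics capture, the toy sketch below contrasts uniform attention with attention concentrated on a single token. It assumes the metric methods accept plain NumPy arrays; the matrices themselves are illustrative.

```python
import numpy as np
from fractal_attention_analysis import AttentionMetrics

metrics = AttentionMetrics()

# Toy matrices: uniform attention vs. attention concentrated on one token.
seq_len = 8
uniform = np.full((seq_len, seq_len), 1.0 / seq_len)
peaked = np.zeros((seq_len, seq_len))
peaked[:, 0] = 1.0  # every position attends only to the first token

# Expectation: uniform attention has high entropy and low concentration,
# peaked attention the opposite.
print("uniform entropy:      ", metrics.compute_entropy(uniform))
print("peaked  entropy:      ", metrics.compute_entropy(peaked))
print("uniform concentration:", metrics.compute_concentration(uniform))
print("peaked  concentration:", metrics.compute_concentration(peaked))
```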
#### AttentionVisualizer
Visualization utilities:
```python
from fractal_attention_analysis import AttentionVisualizer
visualizer = AttentionVisualizer()
# Plot attention matrix
fig = visualizer.plot_attention_matrix(
attention_matrix,
tokens=["Hello", "world"],
title="Attention Pattern"
)
# Plot fractal comparison
fig = visualizer.plot_fractal_comparison(
original_attention,
transformed_attention
)
```
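Since the plots are matplotlib-based, the returned figures can be saved or restyled with the usual matplotlib API. A minimal self-contained sketch (the toy matrix is illustrative, and it assumes the plot methods return standard `Figure` objects):

```python
import numpy as np
from fractal_attention_analysis import AttentionVisualizer

# Toy 2x2 row-stochastic attention matrix, purely for illustration.
attention_matrix = np.random.dirichlet(np.ones(2), size=2)

visualizer = AttentionVisualizer()
fig = visualizer.plot_attention_matrix(
    attention_matrix,
    tokens=["Hello", "world"],
    title="Attention Pattern",
)
fig.savefig("attention_pattern.png", dpi=200, bbox_inches="tight")
```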
### Advanced Usage
#### Batch Analysis
```python
texts = [
"First sentence to analyze.",
"Second sentence to analyze.",
"Third sentence to analyze."
]
results = analyzer.analyze_batch(texts)
```
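Continuing from the example above, and assuming `analyze_batch` returns one result dict per input text (mirroring `analyze`), aggregating the batch is a couple of lines:

```python
import numpy as np

# Sketch: assumes `results` is a list of per-text dicts, each with a
# 'fractal_dimension' key like the one returned by analyze().
dims = [r["fractal_dimension"] for r in results]
print(f"mean fractal dimension: {np.mean(dims):.4f} (std {np.std(dims):.4f})")
```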
#### Model Comparison
```python
comparison = analyzer.compare_models(
other_model_name="distilgpt2",
text="Compare attention patterns"
)
print(f"Dimension difference: {comparison['dimension_difference']:.4f}")
```
#### Export Results
```python
# Export as JSON
analyzer.export_results(results, "output.json", format='json')
# Export as CSV
analyzer.export_results(results, "output.csv", format='csv')
# Export as NumPy archive
analyzer.export_results(results, "output.npz", format='npz')
```
## 🔬 Mathematical Foundation
The FAA framework is based on:
1. **Golden Ratio (φ)**: Used for optimal attention partitioning
```
φ = (1 + √5) / 2 ≈ 1.618
```
2. **Fractal Dimension**: Computed with the box-counting method (a NumPy sketch follows this list)
```
D = lim(ε→0) [log N(ε) / log(1/ε)]
```
3. **Fractal Interpolation**: Iterated Function System (IFS) transformations
```
F(x) = Σ wᵢ · fᵢ(x)
```
4. **Neural Fractal Dimension**: Theoretical dimension for neural attention
```
D_neural = φ² / 2 ≈ 1.309
```
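The box-counting estimate in item 2 is easy to prototype. The sketch below is an illustrative NumPy version: the threshold and the dyadic box sizes are arbitrary choices here and may differ from FAA's `compute_fractal_dimension`. It also checks the φ-derived constants from items 1 and 4.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2   # φ ≈ 1.6180
D_NEURAL = PHI ** 2 / 2    # φ² / 2 ≈ 1.3090 (since φ² = φ + 1)

def box_counting_dimension(matrix, threshold=0.01):
    """Estimate the box-counting dimension of the support of `matrix`.

    Illustrative only: the threshold and box sizes are arbitrary choices and
    may differ from FAA's compute_fractal_dimension.
    """
    binary = np.asarray(matrix) > threshold
    n = min(binary.shape)
    sizes, counts = [], []
    size = n // 2
    while size >= 1:
        # N(ε): number of boxes of side `size` containing at least one active cell.
        count = 0
        for i in range(0, n, size):
            for j in range(0, n, size):
                if binary[i:i + size, j:j + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2
    # D is the slope of log N(ε) against log(1/ε).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

print(f"φ = {PHI:.4f}, D_neural = {D_NEURAL:.4f}")
# A dense random matrix fills the plane, so its estimate should be close to 2.
print(f"box-counting estimate: {box_counting_dimension(np.random.rand(64, 64)):.4f}")
```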
## 📈 Performance
- **Analysis Time**: 0.047-0.248s depending on model size
- **Memory Efficient**: Supports models up to 1B parameters on 24GB GPU
- **Universal**: Works with GPT, BERT, T5, LLaMA, Qwen, Gemma, and more
## 🛠️ Development
### Setup Development Environment
```bash
# Clone repository
git clone https://github.com/ross-sec/fractal_attention_analysis.git
cd fractal-attention-analysis
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install in development mode
pip install -e ".[dev]"
# Install pre-commit hooks
pre-commit install
```
### Running Tests
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=fractal_attention_analysis --cov-report=html
# Run specific test file
pytest tests/test_core.py
```
### Code Quality
```bash
# Format code
black src/ tests/
# Sort imports
isort src/ tests/
# Lint
flake8 src/ tests/
# Type check
mypy src/
```
## 📖 Citation
If you use FAA in your research, please cite:
```bibtex
@software{ross2025faa,
title={Fractal-Attention Analysis: A Mathematical Framework for LLM Interpretability},
author={Ross, Andre and Ross, Leorah and Atias, Eyal},
year={2025},
url={https://github.com/ross-sec/fractal_attention_analysis}
}
```
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Areas for Contribution
- Support for additional model architectures
- New fractal transformation methods
- Enhanced visualization capabilities
- Performance optimizations
- Documentation improvements
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 👥 Authors
- **Andre Ross** - *Lead Developer* - [Ross Technologies](mailto:devops.ross@gmail.com)
- **Leorah Ross** - *Co-Developer* - [Ross Technologies](mailto:leorah@ross-developers.com)
- **Eyal Atias** - *Co-Developer* - [Hooking LTD](mailto:eyal@hooking.co.il)
## 🙏 Acknowledgments
- HuggingFace team for the Transformers library
- The open-source AI research community
- Fractal geometry pioneers: Benoit Mandelbrot, Michael Barnsley
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/ross-sec/fractal_attention_analysis/issues)
- **Discussions**: [GitHub Discussions](https://github.com/ross-sec/fractal_attention_analysis/discussions)
- **Email**: devops.ross@gmail.com
## 🗺️ Roadmap
- [ ] Support for multi-head parallel analysis
- [ ] CUDA-optimized fractal computations
- [ ] Real-time streaming analysis
- [ ] Interactive web dashboard
- [ ] Integration with popular interpretability tools (SHAP, LIME)
- [ ] Extended model zoo with pre-computed benchmarks
---
**Made with ❤️ by Ross Technologies & Hooking LTD**
## Star History
[Star History Chart](https://www.star-history.com/#fractal_attention_analysis/fractal_attention_analysis&type=timeline&legend=top-left)