| Field | Value |
| --- | --- |
| Name | src2md |
| Version | 2.1.0 |
| home_page | https://github.com/queelius/src2md |
| Summary | Convert source code to structured, context-optimized markdown for LLMs with intelligent summarization |
| upload_time | 2025-09-04 06:48:28 |
| maintainer | None |
| docs_url | None |
| author | Alex Towell |
| requires_python | >=3.8 |
| license | MIT License (full text below) |
| keywords | markdown, source code, documentation, llm, context-window, code-analysis, summarization, ast, code-compression |
| requirements | pypandoc, pathspec, rich, tiktoken (>=0.5.0), pyyaml (>=6.0), astroid (>=3.0.0) |
| project_urls | [Homepage](https://github.com/queelius/src2md), [Bug Tracker](https://github.com/queelius/src2md/issues), [Changelog](https://github.com/queelius/src2md/blob/main/CHANGELOG.md), [Documentation](https://github.com/queelius/src2md/blob/main/README.md) |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

MIT License

Copyright (c) 2025 Alex Towell

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# src2md




**src2md** is a powerful tool that converts source code repositories into structured, context-window-optimized representations for Large Language Models (LLMs). It addresses the fundamental challenge of fitting meaningful codebases into limited context windows while preserving the most important information through intelligent summarization, AST-based analysis, and optional LLM-powered compression.
## 🚀 **Features**
### New in v2.0
- **🎯 Context Window Optimization**: Intelligently fit codebases into LLM context windows with smart truncation
- **📝 Intelligent Summarization**: AST-based code analysis with multiple compression levels
- **🤖 LLM-Powered Compression**: Optional OpenAI/Anthropic integration for semantic summarization
- **⚡ Fluent API**: Elegant method chaining with new summarization methods
- **📊 File Importance Scoring**: Multi-factor analysis to prioritize critical files
- **🪟 Predefined LLM Windows**: Built-in support for GPT-4, Claude, and more
- **🔄 Progressive Summarization**: Multi-tier compression strategies for different file types
### Core Features
- **Multiple Output Formats**: JSON, JSONL, Markdown, HTML, and plain text
- **Smart Token Management**: Accurate token counting with tiktoken and structure-aware truncation
- **Multi-Language Support**: Specialized summarizers for Python, JavaScript, TypeScript, JSON, YAML
- **Code Statistics**: Automatic generation of project metrics and complexity analysis
- **Flexible Filtering**: Customizable include/exclude patterns
- **Rich CLI Interface**: Beautiful progress indicators and colored output
## 📦 **Installation**
Install via [PyPI](https://pypi.org/) using `pip`:
```bash
pip install src2md
```
## 🛠️ **Usage**
### Quick Start - Fluent API
```python
from src2md import Repository, ContextWindow

# Basic usage
output = Repository("/path/to/project").analyze().to_markdown()

# Optimize for GPT-4 context window
output = (Repository("/path/to/project")
          .optimize_for(ContextWindow.GPT_4)
          .analyze()
          .to_markdown())

# Full fluent API with all features
result = (Repository("/path/to/project")
          .name("MyProject")
          .branch("main")
          .include("src/", "lib/")
          .exclude("tests/", "*.log")
          .with_importance_scoring()
          .with_summarization(
              compression_ratio=0.3,     # Target 30% of original size
              preserve_important=True,   # Keep critical files intact
              use_llm=True               # Use LLM if available
          )
          .prioritize(["main.py", "core/"])
          .optimize_for_tokens(100_000)  # 100K token limit
          .analyze()
          .to_json(pretty=True))
```
### Command Line Interface
```bash
# Basic markdown generation
src2md /path/to/project -o documentation.md

# With context optimization
src2md /path/to/project --gpt4 -o optimized.md
src2md /path/to/project --claude3 --importance

# With intelligent summarization
src2md /path/to/project --summarize --compression-ratio 0.3
src2md /path/to/project --summarize-tests --summarize-docs

# With LLM-powered summarization (requires API key)
src2md /path/to/project --use-llm --llm-model gpt-3.5-turbo

# Multiple output formats
src2md /path/to/project --format json --pretty
src2md /path/to/project --format html -o docs.html
```
### Python API Examples
#### Basic Context Optimization
```python
from src2md import Repository, ContextWindow

# Optimize for different LLM context windows
repo = Repository("./my-project")
output = repo.optimize_for(ContextWindow.CLAUDE_3).analyze().to_markdown()

# Custom token limit with importance scoring
repo = (Repository("./my-project")
        .with_importance_scoring()
        .optimize_for_tokens(50_000)
        .analyze())
```
#### Intelligent Summarization
```python
# Enable smart summarization with compression
repo = (Repository("./my-project")
        .with_summarization(
            compression_ratio=0.3,    # Compress to 30% of original
            preserve_important=True,  # Keep critical files intact
            use_llm=False             # Use AST-based summarization
        )
        .optimize_for(ContextWindow.GPT_4)
        .analyze())

# Use LLM-powered summarization (requires API key)
import os
os.environ['OPENAI_API_KEY'] = 'your-key-here'

repo = (Repository("./my-project")
        .with_summarization(
            compression_ratio=0.2,  # More aggressive compression
            use_llm=True,
            llm_model="gpt-3.5-turbo"
        )
        .analyze())
```
#### Multi-Tier Compression Strategy
```python
# Configure different summarization levels for different file types
repo = (Repository("./my-project")
        .with_importance_scoring()
        .prioritize(["src/core/", "api/"])  # Critical paths
        .summarize_tests()                  # Compress test files
        .summarize_docs()                   # Compress documentation
        .with_summarization(
            compression_ratio=0.25,
            preserve_important=True
        )
        .optimize_for_tokens(100_000)
        .analyze())

# Access summarization metadata
data = repo.to_dict()
for file in data['source_files']:
    if file.get('was_summarized'):
        print(f"Summarized {file['path']}: {file['original_size']} -> {file['size']} bytes")
```
#### Generate Multiple Formats
```python
repo = Repository("./my-project").analyze()
markdown = repo.to_markdown()
json_data = repo.to_json()
html_doc = repo.to_html()

# Access raw data
data = repo.to_dict()
print(f"Files: {data['metadata']['file_count']}")
print(f"Token usage: {data['metadata'].get('total_tokens', 0)}")
print(f"Compression achieved: {data['metadata'].get('compression_ratio', 1.0):.1%}")
```
## 🎯 **Summarization Features**
### AST-Based Python Summarization
src2md uses Abstract Syntax Tree (AST) analysis to intelligently summarize Python code while preserving structure:
- **MINIMAL**: Only class/function signatures
- **OUTLINE**: Signatures with structural hierarchy
- **DOCSTRINGS**: Signatures plus documentation
- **SIGNATURES**: Full signatures with type hints
- **FULL**: No summarization
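
To make these levels concrete, here is a minimal sketch of signature-and-docstring extraction using the standard library's `ast` module. It illustrates the idea behind the MINIMAL/SIGNATURES/DOCSTRINGS levels; it is not src2md's actual summarizer, and the `summarize_python` helper is a hypothetical name:

```python
import ast

def summarize_python(source: str, keep_docstrings: bool = True) -> str:
    """Reduce a module to def/class signatures, optionally with first docstring lines."""
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # The first source line of the node is its signature (assumes one-line signatures).
            header = ast.get_source_segment(source, node).splitlines()[0]
            lines.append(header)
            doc = ast.get_docstring(node)
            if keep_docstrings and doc:
                lines.append(f'    """{doc.splitlines()[0]}"""')
    return "\n".join(lines)

code = "def add(a: int, b: int) -> int:\n    'Add two numbers.'\n    return a + b"
print(summarize_python(code))
# def add(a: int, b: int) -> int:
#     """Add two numbers."""
```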
### Multi-Language Support
Specialized summarizers for different file types:
- **Python**: AST-based analysis with import/export preservation
- **JavaScript/TypeScript**: Function and class extraction
- **JSON/YAML**: Schema extraction with sample data
- **Test Files**: Test name and assertion extraction
- **Documentation**: Heading and key point extraction
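
For the JSON/YAML case, "schema extraction with sample data" can be pictured as replacing leaf values with their types while keeping a small sample of list items. The sketch below only illustrates that idea and is not src2md's summarizer:

```python
import json

def json_skeleton(value, sample_items: int = 1):
    """Replace leaf values with type names; keep only a few sample items per list."""
    if isinstance(value, dict):
        return {k: json_skeleton(v, sample_items) for k, v in value.items()}
    if isinstance(value, list):
        return [json_skeleton(v, sample_items) for v in value[:sample_items]]
    return type(value).__name__

doc = {"users": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Lin"}], "active": True}
print(json.dumps(json_skeleton(doc), indent=2))
# {"users": [{"id": "int", "name": "str"}], "active": "bool"}
```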
### Smart Truncation
When files must be truncated to fit token limits:
1. Preserves code structure (complete functions/classes)
2. Maintains syntax validity
3. Prioritizes public APIs over private methods
4. Keeps imports and exports intact
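
A simplified version of this strategy, using `ast` together with `tiktoken` (one of the declared dependencies), keeps complete top-level statements in source order until the token budget runs out. This sketch skips the public-vs-private prioritization step and is not the actual truncation code:

```python
import ast
import tiktoken

def truncate_python(source: str, max_tokens: int, encoding: str = "cl100k_base") -> str:
    """Keep whole top-level statements (imports, functions, classes) until the budget is hit."""
    enc = tiktoken.get_encoding(encoding)
    tree = ast.parse(source)
    kept, used = [], 0
    for node in tree.body:  # iterating tree.body keeps each def/class intact
        segment = ast.get_source_segment(source, node) or ""
        cost = len(enc.encode(segment))
        if used + cost > max_tokens:
            break  # stop before splitting a definition
        kept.append(segment)
        used += cost
    return "\n\n".join(kept)
```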
### LLM-Powered Summarization
Optional integration with OpenAI and Anthropic for semantic compression:
```bash
# Set API keys
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"

# Use LLM summarization
src2md /path/to/project --use-llm --llm-model gpt-3.5-turbo
src2md /path/to/project --use-llm --llm-model claude-3-haiku-20240307
```
## 📊 **Output Formats**
### JSON
Structured data perfect for programmatic processing:
```json
{
  "metadata": {
    "project_name": "my-project",
    "generated_at": "2025-01-01T12:00:00",
    "patterns": {...}
  },
  "statistics": {
    "total_files": 42,
    "languages": {"python": {"count": 15, "total_size": 50000}},
    "project_complexity": 3.2
  },
  "documentation": [...],
  "source_files": [...]
}
```
### JSONL
One JSON object per line - perfect for streaming and big data tools:
```jsonl
{"type": "metadata", "data": {...}}
{"type": "statistics", "data": {...}}
{"type": "source_file", "data": {...}}
```
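
Because every line is a self-contained record of the shape shown above, the file can be processed as a stream without loading it all at once. A small consumer sketch (the filename `project.jsonl` is just an example):

```python
import json
from collections import Counter

counts = Counter()
with open("project.jsonl", encoding="utf-8") as fh:  # illustrative filename
    for line in fh:
        record = json.loads(line)
        counts[record["type"]] += 1  # 'type' / 'data' as in the records above

print(counts)  # e.g. Counter({'source_file': 40, 'documentation': 3, 'metadata': 1})
```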
### HTML
Beautiful, styled documentation ready for the web with syntax highlighting and responsive design.
### Markdown
Clean, readable documentation compatible with GitHub, GitLab, and other platforms.
## 🔧 **Advanced Options**
### File Patterns
```bash
# Custom documentation patterns
src2md project --doc-pat '*.md' '*.rst' '*.txt'

# Specific source file types
src2md project --src-pat '*.py' '*.js' '*.ts'

# Ignore patterns
src2md project --ignore-pat '*.pyc' 'node_modules/' '.git/'
```
### Ignore Files
Create a `.src2mdignore` file in your project root:
```gitignore
# Dependencies
node_modules/
__pycache__/
*.pyc

# Build outputs
dist/
build/
*.egg-info/

# IDE files
.vscode/
.idea/
```
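
These files use gitignore-style patterns. Independently of src2md's internals, the declared `pathspec` dependency is the standard way to evaluate such patterns in Python; a quick sketch of how the example above matches paths:

```python
import pathspec

# Build a matcher from the gitignore-style lines shown above.
ignore_lines = [
    "node_modules/", "__pycache__/", "*.pyc",
    "dist/", "build/", "*.egg-info/",
    ".vscode/", ".idea/",
]
spec = pathspec.PathSpec.from_lines("gitwildmatch", ignore_lines)

print(spec.match_file("src/app.py"))               # False: kept
print(spec.match_file("dist/pkg-1.0.whl"))         # True: ignored
print(spec.match_file("pkg/__pycache__/mod.pyc"))  # True: ignored
```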
### Configuration
```bash
# Use custom ignore file
src2md project --ignore-file .gitignore

# Disable statistics
src2md project --no-stats

# Metadata only (no file contents)
src2md project --no-content
```
## 🎯 **Use Cases**
- **LLM Context**: Generate structured context for AI/ML models
- **Documentation**: Create beautiful project documentation
- **Code Analysis**: Extract metrics and statistics from codebases
- **Data Export**: Convert code to structured formats for analysis
- **Archive**: Create comprehensive snapshots of projects
- **CI/CD**: Generate documentation automatically in build pipelines
## 📈 **Statistics & Metrics**
src2md automatically generates:
- **File Metrics**: Counts by type and language
- **Code Complexity**: Cyclomatic complexity scores
- **Token Usage**: Actual token counts for LLM context
- **Compression Stats**: Before/after summarization metrics
- **Importance Scores**: File prioritization rankings
- **Language Breakdown**: Distribution of code by language
- **Structure Analysis**: Dependency and module relationships
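
These figures end up in the dictionary returned by `to_dict()`. A short sketch that reads the statistics block, using only the keys shown in the JSON example above (other keys may differ between versions):

```python
from src2md import Repository

data = Repository("./my-project").analyze().to_dict()
stats = data["statistics"]

print(f"Total files: {stats['total_files']}")
print(f"Project complexity: {stats['project_complexity']}")
for language, info in stats["languages"].items():
    print(f"  {language}: {info['count']} files, {info['total_size']} bytes")
```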
## 🤝 **Migration from v0.x**
The new version is backward compatible. Existing commands work unchanged:
```bash
# This still works exactly as before
src2md project -o docs.md --doc-pat '*.md' --src-pat '*.py'
```
New features are opt-in through additional flags and the Python API.
## 📄 **License**
MIT License - see [LICENSE](LICENSE) file for details.