# Model VRAM Calculator
A Python CLI tool for estimating GPU memory requirements for Hugging Face models with different data types and parallelization strategies.
## Features
- 🔍 Automatically fetch model configurations from Hugging Face
- 📊 Support multiple data types: fp32, fp16/bf16, fp8, int8, int4, mxfp4, nvfp4
- 🎯 Memory estimation for different scenarios:
- **Inference**: Model weights + KV cache overhead
- **Training**: Including gradients and optimizer states (Adam)
- **LoRA Fine-tuning**: Low-rank adaptation fine-tuning memory requirements
- ⚡ Calculate memory distribution across parallelization strategies:
- Tensor Parallelism (TP): 1, 2, 4, 8
- Pipeline Parallelism (PP): 1, 2, 4, 8
- Expert Parallelism (EP)
- Data Parallelism (DP)
- Combined strategies (TP + PP)
- 🎮 GPU compatibility checks:
- Common GPU type recommendations (RTX 4090, A100, H100, etc.)
- Minimum GPU memory requirement calculations
- 📈 Professional table output using the Rich library:
- 🎨 Color coding and beautiful borders
- 📊 Progress bars and status displays
- 🚀 Modern CLI interface experience
- 🔧 Customizable parameters: LoRA rank, batch size, sequence length
## Installation
```bash
pip3 install -r requirements.txt
```
> Main dependencies: `requests` and `rich` (for beautiful tables and progress display)
## Usage
### Basic Usage
```bash
python3 vram_calculator.py microsoft/DialoGPT-medium
```
### Specify Data Type
```bash
python3 vram_calculator.py meta-llama/Llama-2-7b-hf --dtype bf16
```
### Custom Batch Size and Sequence Length
```bash
python3 vram_calculator.py mistralai/Mistral-7B-v0.1 --batch-size 4 --sequence-length 4096
```
### Show Detailed Parallelization Strategies and GPU Recommendations
```bash
python3 vram_calculator.py --show-detailed microsoft/DialoGPT-medium
```
### Custom LoRA Rank for Fine-tuning Memory Estimation
```bash
python3 vram_calculator.py --lora-rank 128 --show-detailed microsoft/DialoGPT-medium
```
### View Available Data Types and GPU Models
```bash
python3 vram_calculator.py --list-types
```
### Use Custom Configuration
```bash
# Use custom configuration directory
python3 vram_calculator.py --config-dir ./my_config microsoft/DialoGPT-medium
```
## Command Line Arguments
- `model_name`: Hugging Face model name (required)
- `--dtype`: Specify data type (optional, default: show all types)
- `--batch-size`: Batch size for activation memory estimation (default: 1)
- `--sequence-length`: Sequence length for activation memory estimation (default: 2048)
- `--lora-rank`: LoRA rank parameter for fine-tuning (default: 64)
- `--show-detailed`: Show detailed parallelization strategies and GPU recommendations
- `--config-dir`: Specify custom configuration directory
- `--list-types`: List all available data types and GPU models
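
For reference, here is a minimal sketch of how a CLI with these flags could be wired up using `argparse`. It mirrors the options and defaults documented above but is not the tool's actual source; in particular, making `model_name` optional (`nargs="?"`) so that `--list-types` can run without a model is an assumption.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of an argument parser matching the documented flags (not the real source)."""
    parser = argparse.ArgumentParser(
        description="Estimate GPU memory requirements for Hugging Face models"
    )
    # nargs="?" is an assumption so that --list-types can run without a model name
    parser.add_argument("model_name", nargs="?", help="Hugging Face model name")
    parser.add_argument("--dtype", help="data type to report (default: all types)")
    parser.add_argument("--batch-size", type=int, default=1)
    parser.add_argument("--sequence-length", type=int, default=2048)
    parser.add_argument("--lora-rank", type=int, default=64)
    parser.add_argument("--show-detailed", action="store_true")
    parser.add_argument("--config-dir", help="custom configuration directory")
    parser.add_argument("--list-types", action="store_true")
    return parser

if __name__ == "__main__":
    print(build_parser().parse_args())
```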
## Configuration System
The tool uses separate JSON configuration files to manage data types and GPU specifications, allowing flexible user customization:
### Configuration File Structure
- **`data_types.json`** - Define data types and bytes per parameter
- **`gpu_types.json`** - Define GPU models and memory specifications
- **`display_settings.json`** - Control display styles and behavior
### Adding Custom Data Types
Edit the `data_types.json` file:
```json
{
"your_custom_format": {
"bytes_per_param": 0.75,
"description": "Your custom 6-bit format"
}
}
```
### Adding Custom GPU Models
Edit the `gpu_types.json` file:
```json
{
"name": "RTX 5090",
"memory_gb": 32,
"category": "consumer",
"architecture": "Blackwell"
}
```
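
As a rough illustration (a hypothetical loader, not the tool's actual code), the JSON files above could be consumed like this. The file names match the configuration structure described earlier; the function name, default path, and return shape are assumptions.

```python
import json
from pathlib import Path

def load_configs(config_dir: str = "./config") -> tuple[dict, object]:
    """Hypothetical loader for the configuration files described above."""
    base = Path(config_dir)
    # data_types.json maps a dtype name to {"bytes_per_param": ..., "description": ...}
    data_types = json.loads((base / "data_types.json").read_text())
    # gpu_types.json holds GPU entries such as the RTX 5090 example above
    gpu_types = json.loads((base / "gpu_types.json").read_text())
    return data_types, gpu_types

# e.g. data_types["your_custom_format"]["bytes_per_param"] -> 0.75
```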
For detailed configuration instructions, please refer to: [CONFIG_GUIDE.md](CONFIG_GUIDE.md)
## Supported Data Types
| Data Type | Bytes per Parameter | Description |
|-----------|--------------------|-----------|
| fp32 | 4 | 32-bit floating point |
| fp16 | 2 | 16-bit floating point |
| bf16 | 2 | Brain Float 16 |
| fp8 | 1 | 8-bit floating point |
| int8 | 1 | 8-bit integer |
| int4 | 0.5 | 4-bit integer |
| mxfp4 | 0.5 | Microscaling (OCP MX) 4-bit floating point |
| nvfp4 | 0.5 | NVIDIA FP4 |
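
These per-parameter sizes translate directly into raw weight footprints. The mapping and helper below are illustrative only (not part of the tool's API), using the values from the table above:

```python
# Bytes per parameter, taken from the table above
BYTES_PER_PARAM = {
    "fp32": 4.0, "fp16": 2.0, "bf16": 2.0, "fp8": 1.0,
    "int8": 1.0, "int4": 0.5, "mxfp4": 0.5, "nvfp4": 0.5,
}

def weight_size_gb(num_params: int, dtype: str) -> float:
    """Raw weight footprint in GB (GiB): parameters x bytes per parameter."""
    return num_params * BYTES_PER_PARAM[dtype] / 1024**3

# DialoGPT-medium (404,966,400 parameters) in bf16 -> ~0.75 GB, as in the example output below
print(round(weight_size_gb(404_966_400, "bf16"), 2))  # 0.75
```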
## Parallelization Strategies
### Tensor Parallelism (TP)
Splits model weights by tensor dimensions across multiple GPUs.
### Pipeline Parallelism (PP)
Distributes different model layers to different GPUs.
### Expert Parallelism (EP)
For MoE (Mixture of Experts) models, distributes expert networks to different GPUs.
### Data Parallelism (DP)
Each GPU holds a complete model copy, only splitting data.
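
Under the idealized assumption used in the strategy table below (weights divide evenly across devices and communication buffers are ignored), per-GPU inference memory is simply the total divided by the TP × PP degree. A minimal sketch, assuming the ×1.2 inference factor described under "Calculation Formulas":

```python
def per_gpu_inference_gb(total_inference_gb: float, tp: int = 1, pp: int = 1) -> float:
    """Idealized per-GPU memory when weights are split evenly across tp * pp GPUs."""
    return total_inference_gb / (tp * pp)

# BF16 inference for the example model (DialoGPT-medium, 404,966,400 parameters)
weights_gb = 404_966_400 * 2 / 1024**3        # ~0.75 GB of bf16 weights
inference_gb = weights_gb * 1.2               # ~0.91 GB including KV-cache overhead
print(round(per_gpu_inference_gb(inference_gb, tp=2), 2))         # ~0.45
print(round(per_gpu_inference_gb(inference_gb, tp=2, pp=2), 2))   # ~0.23
```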
## Example Output
### Basic Output (Default Mode)
```
================================================================================
Model: microsoft/DialoGPT-medium
Architecture: gpt2
Parameters: 404,966,400
================================================================================
Memory Requirements by Data Type and Scenario:
================================================================================
Data Type    Total Size (GB)    Inference (GB)    Training (Adam) (GB)    LoRA (GB)
────────────────────────────────────────────────────────────────────────────────
FP32         1.51               1.81              7.84                    1.84
FP16         0.75               0.91              3.92                    0.94
BF16         0.75               0.91              3.92                    0.94
INT8         0.38               0.45              1.96                    0.48
INT4         0.19               0.23              0.98                    0.26
```
### Detailed Output (--show-detailed mode)
```
================================================================================
Model: microsoft/DialoGPT-medium
Architecture: gpt2
Parameters: 404,966,400
================================================================================
Memory Requirements by Data Type and Scenario:
================================================================================
Data Type    Total Size (GB)    Inference (GB)    Training (Adam) (GB)    LoRA (GB)
────────────────────────────────────────────────────────────────────────────────
FP32         1.51               1.81              7.84                    1.84
FP16         0.75               0.91              3.92                    0.94
BF16         0.75               0.91              3.92                    0.94
INT8         0.38               0.45              1.96                    0.48
INT4         0.19               0.23              0.98                    0.26

Parallelization Strategies (BF16 Inference):
================================================================================
Strategy             TP    PP    EP    DP    Memory/GPU (GB)    Min GPU Memory
────────────────────────────────────────────────────────────────────────────────
Single GPU            1     1     1     1    0.91               4GB+
Tensor Parallel       2     1     1     1    0.45               4GB+
Tensor Parallel       4     1     1     1    0.23               4GB+
Tensor Parallel       8     1     1     1    0.11               4GB+
Pipeline Parallel     1     2     1     1    0.45               4GB+
Pipeline Parallel     1     4     1     1    0.23               4GB+
Pipeline Parallel     1     8     1     1    0.11               4GB+
TP + PP               2     2     1     1    0.23               4GB+
TP + PP               2     4     1     1    0.11               4GB+
TP + PP               4     2     1     1    0.11               4GB+
TP + PP               4     4     1     1    0.06               4GB+

Recommendations:
================================================================================
GPU Type       Memory    Inference    Training    LoRA
────────────────────────────────────────────────────────────────────────────────
RTX 4090       24 GB     ✓            ✓           ✓
A100 40GB      40 GB     ✓            ✓           ✓
A100 80GB      80 GB     ✓            ✓           ✓
H100           80 GB     ✓            ✓           ✓

Minimum GPU Requirements:
────────────────────────────────────────────────────────────────────────────────
Single GPU Inference: 0.9GB
Single GPU Training:  3.9GB
Single GPU LoRA:      0.9GB
```
## Calculation Formulas
### Inference Memory
```
Inference Memory = Model Weights × 1.2
```
Includes model weights and KV cache overhead.
### Training Memory (with Adam)
```
Training Memory = Model Weights × 4 × 1.3
```
- 4x factor: Model weights (1x) + Gradients (1x) + Adam optimizer states (2x)
- 1.3x factor: 30% additional overhead (activation caching, etc.)
### LoRA Fine-tuning Memory
```
LoRA Memory = (Model Weights + LoRA Parameter Overhead) × 1.2
```
The LoRA parameter overhead is calculated from the rank and the fraction of target modules.
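Putting the three formulas together, here is a minimal sketch (assuming GB means GiB and leaving the LoRA adapter overhead as an explicit input, since it depends on rank and target modules; the function name and return shape are illustrative, not the tool's actual implementation):

```python
GIB = 1024**3

def memory_estimates_gb(num_params: int, bytes_per_param: float,
                        lora_overhead_gb: float = 0.0) -> dict:
    """Apply the three formulas above; a sketch, not the tool's exact implementation."""
    weights = num_params * bytes_per_param / GIB
    return {
        "inference": weights * 1.2,                    # weights + KV-cache overhead
        "training_adam": weights * 4 * 1.3,            # weights + grads + 2x Adam states, +30% overhead
        "lora": (weights + lora_overhead_gb) * 1.2,    # weights + adapter overhead, + inference overhead
    }

# Worked check against the example output (404,966,400 params, bf16 = 2 bytes/param):
est = memory_estimates_gb(404_966_400, 2.0)
print({k: round(v, 2) for k, v in est.items()})
# -> {'inference': 0.91, 'training_adam': 3.92, 'lora': 0.91}
# (the 0.94 GB LoRA figure in the example output includes adapter overhead at the default rank)
```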
## Notes
1. **Activation Memory**: The estimate is deliberately simplified; actual activation memory can be much lower in practice thanks to optimizations such as gradient checkpointing
2. **Parallelization Efficiency**: Figures assume ideal splitting; actual per-GPU memory may be slightly higher due to communication overhead
3. **LoRA Estimation**: Based on a typical configuration (adapters on roughly 25% of modules); actual usage depends on the specific implementation
4. **Mixed Data Types**: Some setups use mixed precision, so actual memory may fall between the values listed for the individual data types
5. **Model Architecture Differences**: Architectures such as MoE may have distinctive memory distribution patterns
## Supported Model Architectures
The tool currently focuses on Transformer-architecture models, including but not limited to:
- GPT series
- LLaMA series
- Mistral series
- BERT series
- T5 series
## Contributing
Issues and pull requests to improve this tool are welcome!