Name | slimformers |
Version | 1.4.6 |
home_page | None |
Summary | Lightweight Optimization and Model Adaptation |
upload_time | 2025-08-01 15:36:17 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.7 |
license | MIT License Copyright © 2025 Caden Chen Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
keywords | transformers, llm, pruning, lora, model optimization, compression |
VCS | https://github.com/sakufish/slimformers/ |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# Slimformers
Slimformers is a lightweight Python framework for pruning and fine-tuning transformer models. It supports activation-based MLP (FFN) pruning, attention head pruning, and low-rank adaptation (LoRA), without requiring any manual layer specification.
# Features
- Prunes neurons based on average activations across multiple batches
- Prunes attention heads based on mean query activations
- Automatic FFN and gated FFN block discovery for common architectures (GPT-2, BERT, LLaMA)
- Safely rebuilds pruned `nn.Linear` and `Conv1D` layers
- LoRA fine-tuning with auto-inferred target modules
- Compatible with Hugging Face models and tokenizers
# Quick Start
## Basic Pruning
```python
from slimformers import Pruner
from transformers import AutoModel, AutoTokenizer
import torch

# Load your model
model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Create pruner
pruner = Pruner(model)

# Prepare your data (returns dict with input_ids, attention_mask, etc.)
dataloader = your_dataloader_here

# Prune 30% of neurons based on activation magnitudes
pruner.prune_all_mlp_layers(
    dataloader=dataloader,
    sparsity=0.3,
    max_batches=10
)
```
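In the snippet above, `your_dataloader_here` is a placeholder. A minimal sketch of one way to build it, assuming plain text inputs and batches that are dicts of tensors (the example texts and batch size are purely illustrative):

```python
from torch.utils.data import DataLoader

# Illustrative sentences; substitute your own corpus
texts = [
    "Transformers are large.",
    "Pruning removes redundant neurons.",
    "LoRA adapts models cheaply.",
]

# Tokenize to fixed-length tensors so batches stack cleanly
encodings = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One dict per example; the default collate_fn stacks them into
# a batch dict with input_ids, attention_mask, etc.
dataset = [{key: tensor[i] for key, tensor in encodings.items()} for i in range(len(texts))]
dataloader = DataLoader(dataset, batch_size=2)
```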
## Prune Attention Heads
```python
# Prune 40% of attention heads based on query activations
pruner.prune_attention_heads(
    dataloader=dataloader,
    sparsity=0.4,
    max_batches=10
)
```
## LoRA Fine-tuning
```python
from slimformers import lora_finetune
from peft import TaskType

# Fine-tune with LoRA after pruning
fine_tuned_model = lora_finetune(
    model=model,
    dataloader=train_dataloader,
    epochs=3,
    lr=1e-4,
    device="cuda",
    r=8,
    alpha=16,
    task_type=TaskType.TOKEN_CLS
)
```
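Assuming the returned object behaves like a standard Hugging Face / PEFT model, the adapted weights can then be persisted for later reuse:

```python
# Save the LoRA-adapted model (directory name is illustrative)
fine_tuned_model.save_pretrained("slimformers-lora-checkpoint")
```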
## Custom Prune Strategy
```python
def custom_neuron_selection(activations, sparsity):
    """Custom strategy: keep neurons with highest variance"""
    if activations.dim() == 3:
        variance = activations.var(dim=(0, 1))
    else:
        variance = activations.var(dim=0)

    total = variance.size(0)
    k = int((1.0 - sparsity) * total)
    return torch.topk(variance, k=k).indices, total

# Use custom strategy
pruner = Pruner(model, pruning_strategy=custom_neuron_selection)
```
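Any callable with the same `(activations, sparsity) -> (kept_indices, total)` signature should work. For instance, a hypothetical magnitude-based variant:

```python
def l1_neuron_selection(activations, sparsity):
    """Alternative sketch: keep neurons with the largest mean absolute activation"""
    # Collapse batch (and sequence) dimensions into one score per neuron
    if activations.dim() == 3:
        scores = activations.abs().mean(dim=(0, 1))
    else:
        scores = activations.abs().mean(dim=0)

    total = scores.size(0)
    k = int((1.0 - sparsity) * total)
    return torch.topk(scores, k=k).indices, total

pruner = Pruner(model, pruning_strategy=l1_neuron_selection)
```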
## Pruning Report
After pruning, `pruner.report()` displays a summary of the compression results. This includes:
- Original and pruned parameter counts
- Percentage reduction in model size
- CPU and GPU memory usage before and after pruning
- Peak GPU memory usage (if CUDA is enabled)
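For example, a typical end-to-end flow calls it once both pruning passes have finished (the sparsity values below simply reuse the ones from the Quick Start):

```python
pruner.prune_all_mlp_layers(dataloader=dataloader, sparsity=0.3, max_batches=10)
pruner.prune_attention_heads(dataloader=dataloader, sparsity=0.4, max_batches=10)

# Print the compression summary described above
pruner.report()
```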
### Example
Pruning was run on `deepseek-ai/deepseek-coder-1.3b-base` with 40% sparsity using a Lenovo ThinkPad T490 (Intel i5-8365U CPU, no GPU!):
- Original Parameters: `1,346,471,936`
- Pruned Parameters: `1,024,855,424`
- Total Reduction: `321,616,512 (23.89%)`
- CPU Memory: `(Before --> After): 5398.57 MB --> 4253.34 MB (–1145.23 MB)`
# Limitations
Slimformers is designed to be lightweight and architecture-agnostic, but it currently has some limitations:
- **Limited model support (for now)**  
  Currently, attention head and FFN pruning supports GPT-2, BERT, and LLaMA-style models. Encoder-decoder architectures like T5 or BART (with cross-attention), and other variants like Falcon or BLOOM, are not supported yet. FFN pruning also assumes standard `nn.Linear` or `Conv1D` layers; if your model uses custom MLP designs like SwiGLU, gated FFNs, or fused blocks, you'll need to add custom discovery logic (a purely illustrative sketch of such logic follows this list).

  That said, **support for more models will be added over time**. The framework is modular, and the discovery system is easy to extend. Feel free to contribute or fork it to add support for other architectures. I will continue to expand the library's coverage.
- **Won’t work with exotic attention layouts**  
  If your model uses grouped heads, custom fused QKV projections, or MoE-style head routing, the default slicing logic might fail. This is rare for most Hugging Face models, but possible.
- **Not optimized for speed (Yet!)**
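For models with non-standard MLP blocks, the discovery step essentially has to locate each layer's projection modules by name. A purely illustrative sketch of such logic (not the library's actual API), assuming a LLaMA-style gated FFN with `gate_proj`/`up_proj`/`down_proj` module names:

```python
import torch.nn as nn

def find_gated_ffn_blocks(model):
    """Hypothetical helper: group gate/up/down projections per transformer layer."""
    blocks = {}
    for name, module in model.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        # e.g. "model.layers.3.mlp.gate_proj" -> prefix "model.layers.3.mlp"
        prefix, _, leaf = name.rpartition(".")
        if leaf in ("gate_proj", "up_proj", "down_proj"):
            blocks.setdefault(prefix, {})[leaf] = module
    # Keep only layers where all three projections were found
    return {prefix: mods for prefix, mods in blocks.items() if len(mods) == 3}
```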
Raw data
{
"_id": null,
"home_page": null,
"name": "slimformers",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "transformers, LLM, pruning, LoRA, model optimization, compression",
"author": null,
"author_email": "Caden Chen <cadenc.woss@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/29/76/1a1cd1c1cb7d717fa920ae6ccca5d8786c1ff34ad5e7ae38914f33413800/slimformers-1.4.6.tar.gz",
"platform": null,
"description": "# Slimformers\r\n\r\nSlimformers is a lightweight Python framework for pruning and fine-tuning transformer models. It supports activation-based MLP (FFN) pruning, attention head pruning, low-rank adaptation (LoRA) without needing any manual layer specification.\r\n\r\n# Features\r\n\r\n- Prunes neurons based on average activations across multiple batches\r\n- Prunes attention heads based on mean query activations\r\n- Automatic FFN and gated FFN block discovery for common architectures (GPT-2, BERT, LLaMA)\r\n- Safely rebuilds pruned `nn.Linear` and `Conv1D` layers\r\n- LoRA fine-tuning with auto-inferred target modules\r\n- Compatible with Hugging Face models and tokenizers\r\n\r\n# Quick Start\r\n\r\n## Basic Pruning\r\n\r\n```python\r\nfrom slimformers import Pruner\r\nfrom transformers import AutoModel, AutoTokenizer\r\nimport torch\r\n\r\n# Load your model\r\nmodel = AutoModel.from_pretrained(\"bert-base-uncased\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n# Create pruner\r\npruner = Pruner(model)\r\n\r\n# Prepare your data (returns dict with input_ids, attention_mask, etc.)\r\ndataloader = your_dataloader_here\r\n\r\n# Prune 30% of neurons based on activation magnitudes\r\npruner.prune_all_mlp_layers(\r\n dataloader=dataloader,\r\n sparsity=0.3,\r\n max_batches=10\r\n)\r\n```\r\n## Prune Attention Heads\r\n``` python\r\n# Prune 40% of attention heads based on query activations\r\npruner.prune_attention_heads(\r\n dataloader=dataloader,\r\n sparsity=0.4,\r\n max_batches=10\r\n)\r\n```\r\n\r\n## LoRA Fine-tuning\r\n``` python\r\nfrom slimformers import lora_finetune\r\nfrom peft import TaskType\r\n\r\n# Fine-tune with LoRA after pruning\r\nfine_tuned_model = lora_finetune(\r\n model=model,\r\n dataloader=train_dataloader,\r\n epochs=3,\r\n lr=1e-4,\r\n device=\"cuda\",\r\n r=8,\r\n alpha=16,\r\n task_type=TaskType.TOKEN_CLS\r\n)\r\n```\r\n## Custom Prune Strategy\r\n``` python\r\ndef custom_neuron_selection(activations, sparsity):\r\n \"\"\"Custom strategy: keep neurons with highest variance\"\"\"\r\n if activations.dim() == 3:\r\n variance = activations.var(dim=(0,1))\r\n else:\r\n variance = activations.var(dim=0)\r\n \r\n total = variance.size(0)\r\n k = int((1.0 - sparsity) * total)\r\n return torch.topk(variance, k=k).indices, total\r\n\r\n# Use custom strategy\r\npruner = Pruner(model, pruning_strategy=custom_neuron_selection)\r\n```\r\n## Pruning Report\r\n\r\nAfter pruning, ```pruner.report()``` displays a summary of the compression results. This includes:\r\n- Original and pruned parameters counts\r\n- Percentage reduction model size\r\n- CPU and GPU memory usage before and after pruning\r\n- Peak GPU memory usage (if CUDA enabled)\r\n\r\n### Example \r\n\r\nPruning was run on ```deepseek-ai/deepseek-coder-1.3b-base``` with 40% sparsity using a Lenovo ThinkPad T490 (Intel i5-8365U CPU, no GPU!): \r\n- Original Parameters: ```1,346,471,936```\r\n- Pruned Parameters: ```1,024,855,424```\r\n- Total Reduction: ```321,616,512 (23.89%)```\r\n- CPU Memory: ```(Before --> After): 5398.57 MB --> 4253.34 MB (\u20131145.23 MB)```\r\n\r\n# Limitations\r\n\r\nSlimformers is made to be lightweight and architecture agnostic, but there are current limitations:\r\n\r\n- **Limited model support (for now)** \r\n Currently, attention head and FFN pruning supports GPT\u20112, BERT, and LLaMA type models. Encoder-decoder architectures like T5 or BART (with cross-attention), and other variants like Falcon or BLOOM, are not supported yet. 
Also, FFN pruning assumes standard `nn.Linear` or `Conv1D` layers. If your model uses custom MLP designs like SwiGLU, Gated FFNs, or fused blocks, you'll need to add custom discovery logic.\r\n\r\n That said, **support for more models will be added over time**. The framework is modular, and the discovery system is easy to extend. Feel free to contribute or fork it to add support for other architectures. I will continue to expand the library's coverage.\r\n\r\n- **Won\u2019t work with exotic attention layouts** \r\n If your model uses grouped heads, custom fused QKV projections, or MoE-style head routing, the default slicing logic might fail. This is rare for most Hugging Face models, but possible.\r\n\r\n- **Not optimized for speed (Yet!)** \r\n",
"bugtrack_url": null,
"license": "MIT License Copyright \u00a9 2025 Caden Chen Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.",
"summary": "Lightweight Optimization and Model Adaptation",
"version": "1.4.6",
"project_urls": {
"Homepage": "https://slimformers.vercel.app/",
"Source": "https://github.com/sakufish/slimformers/"
},
"split_keywords": [
"transformers",
" llm",
" pruning",
" lora",
" model optimization",
" compression"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "0f546106ec3eaa121b0bf4e577947ad092d8cf0de1cd430578c94c0aae844db4",
"md5": "8a8519491a5fb26bc6409b5c4f37b2a2",
"sha256": "38690857e6dd66038507363545d7b8da1bf09eeec08e37fddd89f4b31cf8e395"
},
"downloads": -1,
"filename": "slimformers-1.4.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "8a8519491a5fb26bc6409b5c4f37b2a2",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 12355,
"upload_time": "2025-08-01T15:36:16",
"upload_time_iso_8601": "2025-08-01T15:36:16.640714Z",
"url": "https://files.pythonhosted.org/packages/0f/54/6106ec3eaa121b0bf4e577947ad092d8cf0de1cd430578c94c0aae844db4/slimformers-1.4.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "29761a1cd1c1cb7d717fa920ae6ccca5d8786c1ff34ad5e7ae38914f33413800",
"md5": "fd6bbc9a9cc62120dab4f8174e14db2b",
"sha256": "37e7fd26d7b80b267c89ef000ac5d2a191930d4af3b5121428c077cdd9dd25c8"
},
"downloads": -1,
"filename": "slimformers-1.4.6.tar.gz",
"has_sig": false,
"md5_digest": "fd6bbc9a9cc62120dab4f8174e14db2b",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 14653,
"upload_time": "2025-08-01T15:36:17",
"upload_time_iso_8601": "2025-08-01T15:36:17.781172Z",
"url": "https://files.pythonhosted.org/packages/29/76/1a1cd1c1cb7d717fa920ae6ccca5d8786c1ff34ad5e7ae38914f33413800/slimformers-1.4.6.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-01 15:36:17",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "sakufish",
"github_project": "slimformers",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "slimformers"
}