Automatic-Hook

Name: Automatic-Hook
Version: 0.2.0
Home page: https://github.com/HP2706/Automatic_Hook.git
Summary: make any model compatible with transformer_lens
Upload time: 2024-07-16 21:03:35
Author: HP
Requires Python: <4.0,>=3.11
License: MIT
# Automatic_Hook

AutoHooked is a Python library that makes it possible to use arbitrary models in transformer_lens.
It works via an auto_hook function that wraps your PyTorch model and attaches a HookPoint to every major operation, covering both `nn.Module` and `nn.Parameter` usage.

## Features

- Works with both `nn.Module` and `nn.Parameter` operations
- Can be used either as a class decorator or on an already instantiated model

## Installation

```bash
pip install Automatic_Hook
```

## Usage

### Usage as a decorator

```python
from Automatic_Hook import auto_hook
import torch.nn as nn

@auto_hook
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        # self.fc1_hook_point = HookPoint()  # no longer needed

    def forward(self, x):
        # self.fc1_hook_point(self.fc1(x))  # no longer needed
        return self.fc1(x)

model = MyModel()
print(model.hook_dict.items())  # dict_items([('hook_point', HookPoint()), ('fc1.hook_point', HookPoint())])
```
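
The generated HookPoints are ordinary `nn.Module`s, so you can capture activations with standard PyTorch forward hooks. Below is a minimal sketch, assuming the wrapped forward pass routes values through the generated HookPoints (which is the library's stated purpose); the hook names match the `hook_dict` output above:

```python
import torch

model = MyModel()
cache = {}

# Register a plain PyTorch forward hook on each generated HookPoint
# to record the activation that flows through it.
for name, hook_point in model.hook_dict.items():
    hook_point.register_forward_hook(
        lambda module, inputs, output, name=name: cache.update({name: output})
    )

_ = model(torch.randn(2, 10))
print(list(cache.keys()))  # expected: ['hook_point', 'fc1.hook_point']
```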

### Wrap an instance

AutoHooked can also work with models that use `nn.Parameter`, such as this AutoEncoder example:

```python
from Automatic_Hook import auto_hook
import torch
from torch import nn

# taken from Neel Nanda's excellent autoencoder tutorial: https://colab.research.google.com/drive/1u8larhpxy8w4mMsJiSBddNOzFGj7_RTn#scrollTo=MYrIYDEfBtbL
class AutoEncoder(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        d_hidden = cfg["d_mlp"] * cfg["dict_mult"]
        d_mlp = cfg["d_mlp"]
        dtype = torch.float32
        torch.manual_seed(cfg["seed"])
        self.W_enc = nn.Parameter(
            torch.nn.init.kaiming_uniform_(
                torch.empty(d_mlp, d_hidden, dtype=dtype)))
        self.W_dec = nn.Parameter(
            torch.nn.init.kaiming_uniform_(
                torch.empty(d_hidden, d_mlp, dtype=dtype)))
        self.b_enc = nn.Parameter(
            torch.zeros(d_hidden, dtype=dtype)
        )
        self.b_dec = nn.Parameter(
            torch.zeros(d_mlp, dtype=dtype)
        )

    def forward(self, x):
        x_cent = x - self.b_dec
        acts = torch.relu(x_cent @ self.W_enc + self.b_enc)
        x_reconstruct = acts @ self.W_dec + self.b_dec
        return x_reconstruct

autoencoder = auto_hook(AutoEncoder({"d_mlp": 10, "dict_mult": 10, "l1_coeff": 10, "seed": 1}))
print(autoencoder.hook_dict.items())
# dict_items([('hook_point', HookPoint()), ('W_enc.hook_point', HookPoint()), ('W_dec.hook_point', HookPoint()), ('b_enc.hook_point', HookPoint()), ('b_dec.hook_point', HookPoint())])
```
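
The same mechanism supports interventions, not just caching. Here is a minimal sketch that zero-ablates whatever flows through `W_enc.hook_point`, again assuming the wrapped forward pass routes through the generated HookPoints:

```python
import torch

def zero_ablate(module, inputs, output):
    # A forward hook that returns a value replaces the module's output.
    return torch.zeros_like(output)

handle = autoencoder.hook_dict["W_enc.hook_point"].register_forward_hook(zero_ablate)
reconstruction = autoencoder(torch.randn(4, 10))  # runs with the ablation active
handle.remove()  # detach the hook afterwards
```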

If this were done manually, the code would be far less clean:

```python
import torch
from torch import nn
from transformer_lens.hook_points import HookPoint

class AutoEncoder(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        d_hidden = cfg['d_mlp'] * cfg['dict_mult']
        d_mlp = cfg['d_mlp']
        dtype = torch.float32
        torch.manual_seed(cfg['seed'])
        self.W_enc = nn.Parameter(
            torch.nn.init.kaiming_uniform_(
                torch.empty(d_mlp, d_hidden, dtype=dtype)
            )
        )
        self.W_enc_hook_point = HookPoint()
        self.W_dec = nn.Parameter(
            torch.nn.init.kaiming_uniform_(
                torch.empty(d_hidden, d_mlp, dtype=dtype)
            )
        )
        self.W_dec_hook_point = HookPoint()
        self.b_enc = nn.Parameter(
            torch.zeros(d_hidden, dtype=dtype)
        )
        self.b_enc_hook_point = HookPoint()
        self.b_dec = nn.Parameter(
            torch.zeros(d_mlp, dtype=dtype)
        )
        self.b_dec_hook_point = HookPoint()

    def forward(self, x):
        x_cent = self.b_dec_hook_point(x - self.b_dec)
        acts = torch.relu(self.b_enc_hook_point(self.W_enc_hook_point(x_cent @ self.W_enc) + self.b_enc))
        x_reconstruct = self.b_dec_hook_point(self.W_dec_hook_point(acts @ self.W_dec) + self.b_dec)
        return x_reconstruct
```

## Note

There may be unsupported edge cases, so a `check_auto_hook` function is provided that runs the model class through all of the library's internal tests.

Note, however, that the results are not always conclusive; treat them as hints rather than guarantees.

```python
import torch
from Automatic_Hook import check_auto_hook

input_kwargs = {'x': torch.randn(10, 10)}
init_kwargs = {'cfg': {'d_mlp': 10, 'dict_mult': 10, 'l1_coeff': 10, 'seed': 1}}
check_auto_hook(AutoEncoder, input_kwargs, init_kwargs)
```

If `strict` is set to `True`, a runtime error is raised when the tests fail; otherwise only a warning is emitted.
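
For example, a strict run might look like this (hypothetical keyword usage; the exact signature of `check_auto_hook` is not documented here):

```python
# Assumed usage: `strict` passed as a keyword argument, per the note above.
check_auto_hook(AutoEncoder, input_kwargs, init_kwargs, strict=True)
```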

## Backward (bwd) Hooks

Trouble can arise when a model or one of its inner components returns a non-tensor object that is then passed to a hook. I am working on a fix; in the meantime, everything still works if those hooks are simply disabled.


            
