# Linear-Relational
[![ci](https://img.shields.io/github/actions/workflow/status/chanind/linear-relational/ci.yaml?branch=main)](https://github.com/chanind/linear-relational)
[![Codecov](https://img.shields.io/codecov/c/github/chanind/linear-relational/main)](https://codecov.io/gh/chanind/linear-relational)
[![PyPI](https://img.shields.io/pypi/v/linear-relational?color=blue)](https://pypi.org/project/linear-relational/)
Linear Relational Embeddings (LREs) and Linear Relational Concepts (LRCs) for LLMs using PyTorch and Huggingface Transformers.
Full docs: [https://chanind.github.io/linear-relational](https://chanind.github.io/linear-relational)
## About
This library provides utilities and PyTorch modules for working with LREs and LRCs. LREs estimate the relation between a subject and object in a transformer language model (LM) as a linear map.
This library assumes you're working with sentences containing a subject, relation, and object. For instance, the sentence "Lyon is located in the country of France" has the subject "Lyon", the relation "located in country", and the object "France". A LRE models a relation like "located in country" as a linear map consisting of a weight matrix $W$ and a bias term $b$, mapping from the activations of the subject (Lyon) at layer $l_s$ to the activations of the object (France) at layer $l_o$:
$$
LRE(s) = W s + b
$$
LREs can be inverted using a low-rank inverse, written $LRE^{\dagger}$, to estimate $s$ from $o$:
$$
LRE^{\dagger}(o) = W^{\dagger}(o - b)
$$
Linear Relational Concepts (LRCs) represent a concept $(r, o)$ as a direction vector $v$ on subject tokens, and can act like a simple linear classifier. For instance, while a LRE can represent the relation "located in country", we could learn a LRC for "located in country: France", "located in country: Germany", "located in country: China", etc... Each LRC direction is just the result of passing an object activation into the inverse LRE equation above:
$$
v_{o} = W^{\dagger}(o - b)
$$
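For intuition, here's a toy sketch of the math above in plain PyTorch. This is illustrative only (the dimensions and rank are arbitrary stand-ins), not the library's implementation:

```python
import torch

# Toy illustration of the LRE equations above, not the library's internals.
d = 8  # stand-in hidden size
W = torch.randn(d, d)
b = torch.randn(d)
s = torch.randn(d)  # subject activations at layer l_s

o_est = W @ s + b  # LRE(s) = W s + b

# Low-rank pseudo-inverse via truncated SVD: keep only the top-k singular values
U, S, Vh = torch.linalg.svd(W)
k = 4
W_pinv = Vh[:k].T @ torch.diag(1.0 / S[:k]) @ U[:, :k].T
s_est = W_pinv @ (o_est - b)  # LRE†(o) = W†(o - b)
```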
For more information on LREs and LRCs, check out the following papers:
- [Identifying Linear Relational Concepts in Large Language Models](https://arxiv.org/abs/2311.08968)
- [Linearity of Relation Decoding in Transformer Language Models](https://arxiv.org/abs/2308.09124)
## Installation
```
pip install linear-relational
```
## Usage
This library assumes you're using PyTorch with a decoder-only generative language model (e.g. GPT, LLaMa, etc...), and a tokenizer from Huggingface.
### Training a LRE
To train a LRE for a relation, first collect prompts which elicit the relation. We provide a `Prompt` class to represent this data, and a `Trainer` class to make training a LRE easy. Below, we train a LRE to represent the "located in country" relation.
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from linear_relational import Prompt, Trainer

# We load a generative LM from huggingface. The LM head must be included.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Prompts consist of text, an answer, and a subject.
# The subject must appear in the text. The answer is what
# the model should respond with, and corresponds to the "object".
prompts = [
    Prompt("Paris is located in the country of", "France", subject="Paris"),
    Prompt("Shanghai is located in the country of", "China", subject="Shanghai"),
    Prompt("Kyoto is located in the country of", "Japan", subject="Kyoto"),
    Prompt("San Jose is located in the country of", "Costa Rica", subject="San Jose"),
]

trainer = Trainer(model, tokenizer)

lre = trainer.train_lre(
    relation="located in country",
    subject_layer=8,  # subject layer must be before the object layer
    object_layer=10,
    prompts=prompts,
)
```
### Working with a LRE
A LRE is a PyTorch module, so once trained, we can use it to predict object activations from subject activations:
```python
object_acts_estimate = lre(subject_acts)
```
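The `subject_acts` here are the model's hidden activations for the last subject token at the subject layer. Below is a minimal sketch of collecting them with a forward hook; `get_subject_acts` is a hypothetical helper (not part of this library), and it assumes a GPT-2 style model where the subject starts the prompt:

```python
import torch

# Hypothetical helper (not part of this library): capture the hidden state
# of the last subject token at a given GPT-2 layer via a forward hook.
def get_subject_acts(model, tokenizer, text, subject, layer):
    captured = {}

    def hook(_module, _inputs, outputs):
        captured["acts"] = outputs[0]  # (batch, seq_len, hidden_size)

    handle = model.transformer.h[layer].register_forward_hook(hook)
    try:
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            model(**inputs)
    finally:
        handle.remove()
    # Assumes the subject starts the text, so its last token's index is
    # the subject's token count minus one.
    subject_len = len(tokenizer(subject)["input_ids"])
    return captured["acts"][0, subject_len - 1]

subject_acts = get_subject_acts(
    model, tokenizer, "Paris is located in the country of", "Paris", layer=8
)
object_acts_estimate = lre(subject_acts)
```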
We can also create a low-rank estimate of the LRE:
```python
low_rank_lre = lre.to_low_rank(50)
low_rank_obj_acts_estimate = low_rank_lre(subject_acts)
```
Finally, we can invert the LRE:
```python
inv_lre = lre.invert(rank=50)
subject_acts_estimate = inv_lre(object_acts)
```
### Training LRCs for a relation
The `Trainer` can also create LRCs for a relation. Internally, this first creates a LRE, inverts it, then generates LRCs from each object in the relation. Objects refer to the answers in the prompts; e.g. in the example above, "France" is an object, "Japan" is an object, etc...
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from linear_relational import Prompt, Trainer

# We load a generative LM from huggingface. The LM head must be included.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Prompts consist of text, an answer, and a subject.
# The subject must appear in the text. The answer is what
# the model should respond with, and corresponds to the "object".
prompts = [
    Prompt("Paris is located in the country of", "France", subject="Paris"),
    Prompt("Shanghai is located in the country of", "China", subject="Shanghai"),
    Prompt("Kyoto is located in the country of", "Japan", subject="Kyoto"),
    Prompt("San Jose is located in the country of", "Costa Rica", subject="San Jose"),
]

trainer = Trainer(model, tokenizer)

concepts = trainer.train_relation_concepts(
    relation="located in country",
    subject_layer=8,
    object_layer=10,
    prompts=prompts,
    max_lre_training_samples=10,
    inv_lre_rank=50,
)
```
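Conceptually, this is roughly equivalent to performing the earlier steps by hand, then passing each object's activations through the inverse LRE. A sketch reusing the API shown above (the `france_acts` tensor is a random stand-in, and the real implementation handles details like collecting and averaging object activations across prompts):

```python
import torch

# Conceptual sketch of what train_relation_concepts does internally;
# the real implementation handles many more details.
lre = trainer.train_lre(
    relation="located in country",
    subject_layer=8,
    object_layer=10,
    prompts=prompts,
)
inv_lre = lre.invert(rank=50)

# One concept direction per object: v_o = W†(o - b)
france_acts = torch.randn(768)  # stand-in for "France" activations at layer 10
v_france = inv_lre(france_acts)
```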
### Causal editing
Once we have LRCs trained, we can use them to perform causal edits while the model is running. For instance, we can perform a causal edit to make the model output that "Shanghai is located in the country of France" by subtracting the "located in country: China" concept from "Shanghai" and adding the "located in country: France" concept. We can use the `CausalEditor` class to perform these edits.
```python
from linear_relational import CausalEditor

concepts = trainer.train_relation_concepts(...)

editor = CausalEditor(model, tokenizer, concepts=concepts)

edited_answer = editor.swap_subject_concepts_and_predict_greedy(
    text="Shanghai is located in the country of",
    subject="Shanghai",
    remove_concept="located in country: China",
    add_concept="located in country: France",
    edit_single_layer=8,
    magnitude_multiplier=3.0,
    predict_num_tokens=1,
)
print(edited_answer)  # " France"
```
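Conceptually, the edit steers the subject's activations by concept directions while the model runs. A simplified sketch of the idea (stand-in tensors only; `CausalEditor` handles the hooking and scaling details, which may differ):

```python
import torch

# Simplified sketch of a causal edit on subject activations; the actual
# CausalEditor implementation may scale and apply this differently.
acts = torch.randn(768)      # subject activations at the edited layer
v_remove = torch.randn(768)  # stand-in for the "located in country: China" LRC
v_add = torch.randn(768)     # stand-in for the "located in country: France" LRC
magnitude_multiplier = 3.0
edited_acts = acts - magnitude_multiplier * v_remove + magnitude_multiplier * v_add
```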
#### Single-layer vs multi-layer edits
Above we performed a single-layer edit, only modifying subject activations at layer 8. However, we may want to perform an edit at all subject layers at the same time instead. To do this, we can pass `edit_single_layer=False` to `editor.swap_subject_concepts_and_predict_greedy()`. We should also reduce the `magnitude_multiplier`, since now we're going to make the edit at every layer; if we use too large a multiplier, we'll drown out the rest of the activations in the model. The `magnitude_multiplier` is a hyperparameter that requires tuning depending on the model being edited.
```python
from linear_relational import CausalEditor

concepts = trainer.train_relation_concepts(...)

editor = CausalEditor(model, tokenizer, concepts=concepts)

edited_answer = editor.swap_subject_concepts_and_predict_greedy(
    text="Shanghai is located in the country of",
    subject="Shanghai",
    remove_concept="located in country: China",
    add_concept="located in country: France",
    edit_single_layer=False,
    magnitude_multiplier=0.1,
    predict_num_tokens=1,
)
print(edited_answer)  # " France"
```
### Concept matching
We can use learned concepts (LRCs) as classifiers, matching them against subject activations in sentences with the `ConceptMatcher` class.
```python
from linear_relational import ConceptMatcher

concepts = trainer.train_relation_concepts(...)

matcher = ConceptMatcher(model, tokenizer, concepts=concepts)

match_info = matcher.query("Beijing is a northern city", subject="Beijing")

print(match_info.best_match.concept)  # located in country: China
print(match_info.best_match.score)  # 0.832
```
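Conceptually, matching compares each concept's direction vector against the subject's activations. A minimal sketch, assuming cosine similarity as the score (the library's exact scoring may differ):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of direction-vector matching; stand-in tensors only.
v = torch.randn(768)             # a learned concept direction
subject_acts = torch.randn(768)  # subject activations at the concept's layer
score = F.cosine_similarity(v, subject_acts, dim=0)
```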
## Acknowledgements
This library is inspired by and uses modified code from the following excellent projects:
- [Locating and Editing Factual Associations in GPT](https://rome.baulab.info/)
- [Linearity of Relation Decoding in Transformer LMs](https://lre.baulab.info/)
## Contributing
Any contributions to improve this project are welcome! Please open an issue or pull request in this repo with any bugfixes / changes / improvements you have!
This project uses [Black](https://github.com/psf/black) for code formatting, [Flake8](https://flake8.pycqa.org/en/latest/) for linting, and [Pytest](https://docs.pytest.org/) for tests. Make sure any changes you submit pass these code checks in your PR. If you have trouble getting these to run feel free to open a pull-request regardless and we can discuss further in the PR.
## License
This code is released under an MIT license.
## Citation
If you use this library in your work, please cite the following:
```bibtex
@article{chanin2023identifying,
  title={Identifying Linear Relational Concepts in Large Language Models},
  author={David Chanin and Anthony Hunter and Oana-Maria Camburu},
  journal={arXiv preprint arXiv:2311.08968},
  year={2023}
}
```