open-kimi

- **Name**: open-kimi
- **Version**: 0.1.2
- **Homepage**: https://github.com/kyegomez/Open-Kimi
- **Summary**: An open-source implementation of the Kimi K2 Reasoning model architecture in pure PyTorch
- **Uploaded**: 2025-11-07 22:14:52
- **Author**: Kye Gomez
- **Requires Python**: <4.0,>=3.10
- **License**: MIT
- **Keywords**: artificial intelligence, deep learning, kimi, kimi-k2, k2-thinking, moonshot-ai, llms, transformers, pytorch, language-models, reasoning-models, mixture-of-experts, moe, mla, attention-mechanisms
- **Requirements**: none recorded
# Open-Kimi-K2-Thinking

![Architecture kimi k2](image/arch.png)

This repository implements the base Kimi K2 Reasoning model architecture in pure PyTorch, keeping the code as simple as possible.

[Official Kimi-K2-Thinking model card on Hugging Face](https://huggingface.co/moonshotai/Kimi-K2-Thinking)

## Install

```bash
pip3 install -U open-kimi
```

## Example

```python
from open_kimi.model import KimiK2
import torch

if __name__ == "__main__":
    # Small "lite" configuration that fits on modest hardware.
    model = KimiK2(
        dim=512,                # hidden dimension
        depth=2,                # number of transformer blocks
        attention_heads=8,
        experts=16,             # total MoE experts
        experts_per_token=4,    # experts routed to each token
        seq_len=1024,
        lite_verison=True,      # parameter name is spelled this way in the library
        vocab_size=10000,
    )

    # Batch of 2 sequences, each seq_len tokens with ids in [0, vocab_size).
    x = torch.randint(0, 10000, (2, 1024))
    out = model(x)
    print(out)
```

## Full Example

```python
from open_kimi.model import KimiK2
import torch

if __name__ == "__main__":
    # Full-size configuration matching the published K2 architecture
    # (7168 hidden dim, 61 layers, 384 experts with 8 active per token).
    model = KimiK2(
        dim=7168,
        depth=61,
        attention_heads=64,
        experts=384,
        experts_per_token=8,
        seq_len=1024,
        lite_verison=False,     # parameter name is spelled this way in the library
        vocab_size=160000,
    )

    # Batch of 2 sequences, each seq_len tokens with ids in [0, vocab_size).
    x = torch.randint(0, 160000, (2, 1024))
    out = model(x)
    print(out)
```

## Kimi Linear

![Kimi Linear Architecture](image/linear.png)

Kimi Linear is a hybrid linear-attention architecture that outperforms full attention under fair comparisons across short-context, long-context, and reinforcement-learning scaling regimes. At its core is **Kimi Delta Attention (KDA)**, an expressive linear attention module that extends Gated DeltaNet with a finer-grained gating mechanism, enabling more effective use of limited finite-state RNN memory.

**Paper**: [Kimi Linear: An Expressive, Efficient Attention Architecture](https://huggingface.co/papers/2510.26692) (arXiv:2510.26692)
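
To make the finer-grained gating concrete, here is a minimal sketch of a delta-rule linear-attention state update with a per-channel decay gate, in the spirit of KDA. It illustrates the mechanism only and is not the library's implementation; `kda_step`, its signature, and the shapes are assumptions for the example.

```python
import torch

def kda_step(S, k, v, beta, alpha):
    """One recurrent step of a gated delta-rule state update.

    Illustrative sketch of the idea behind KDA; this function and
    its signature are hypothetical, not the library's API.

    S:     (d_k, d_v) running linear-attention state
    k:     (d_k,)     current key
    v:     (d_v,)     current value
    beta:  scalar     write strength in [0, 1]
    alpha: (d_k,)     per-channel decay gate in [0, 1]; Gated DeltaNet
                      gates with a single scalar per head, while KDA's
                      finer-grained gate decays each channel separately
    """
    S = alpha.unsqueeze(-1) * S               # channel-wise forgetting
    pred = k @ S                              # value the state currently recalls for k
    S = S + beta * torch.outer(k, v - pred)   # delta rule: write only the recall error
    return S

# Reading out with a query q of shape (d_k,): y = q @ S  ->  (d_v,)
```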

### Usage Example

```python
import torch
from open_kimi.kimi_linear import KimiLinear

if __name__ == "__main__":
    model = KimiLinear(
        dim=512,
        num_heads=8,
        head_dim=64,
        chunk_size=64,      # chunk length for the chunked linear-attention scan
        n_experts=16,       # total MoE experts
        n_activated=4,      # experts routed to each token
        kda_layers=2,
        depth=2,
        vocab_size=10000,
        seq_len=1024,
    )

    # Batch of 2 sequences, each seq_len tokens with ids in [0, vocab_size).
    x = torch.randint(0, 10000, (2, 1024))
    out = model(x)
    print(out)
    print(out.shape)
```

## Post Training

The model's Hugging Face page notes that native INT4 quantization is applied during the post-training phase. A reasonable post-training recipe would therefore include the following (a minimal sketch of two of these steps follows the list):

- Native INT4 quantization
- The Muon optimizer
- GRPO (Group Relative Policy Optimization)
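
As a concrete illustration of two items in this recipe, below is a minimal sketch of symmetric per-group INT4 fake quantization and of GRPO's group-relative advantage computation. The helper names, the group size, and the epsilon values are hypothetical choices for the example; the exact recipe used for Kimi K2 Thinking is not published here.

```python
import torch

def fake_quant_int4(w: torch.Tensor, group_size: int = 32) -> torch.Tensor:
    """Symmetric per-group INT4 fake quantization (quantize then dequantize).
    Hypothetical helper; the official quantization scheme may differ."""
    assert w.numel() % group_size == 0
    groups = w.reshape(-1, group_size)
    # INT4 symmetric range is [-8, 7]; scale each group by its max magnitude.
    scale = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scale), -8, 7)
    return (q * scale).reshape(w.shape)

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style advantages: normalize rewards within a group of rollouts
    sampled from the same prompt, so no learned value baseline is needed.

    rewards: (group_size,) scalar rewards for one prompt's rollouts,
             with group_size > 1.
    """
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)
```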

## Citation

```bibtex
@misc{moonshot-kimi-k2,
  title={Kimi K2 Thinking},
  author={Moonshot AI},
  year={2025},
  howpublished={\url{https://huggingface.co/moonshotai/Kimi-K2-Thinking}}
}
```

## Acknowledgments

This implementation is based on the architecture specifications published by Moonshot AI for the Kimi K2 Thinking model. Special thanks to the Moonshot AI team for making the model architecture details publicly available.

## Contact

For questions, issues, or contributions, please open an issue on the repository or contact the maintainers.

---

**Note**: This is an independent implementation based on publicly available specifications. It is not affiliated with or endorsed by Moonshot AI. For production use, please refer to the official model repository and weights.

            
