lfm-torch


Name: lfm-torch
Version: 0.0.3
Home page: https://github.com/kyegomez/lfm
Summary: lfm - Pytorch
Upload time: 2024-10-24 23:41:48
Author: Kye Gomez
Requires Python: <4.0,>=3.10
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: none recorded

# Liquid Foundation Models [LFMs]

[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)

**Welcome to the open-source implementation of Liquid Foundation Models (LFMs)**, pushing the frontier of real-time learning in AI. LFMs are designed to adapt and learn on the fly, continuously evolving their knowledge and capabilities as they interact with new data. This real-time learning approach lets LFMs stay current in rapidly changing environments, making them well suited to applications that need up-to-the-minute intelligence and adaptability. Whether processing streaming text, analyzing live audio, interpreting real-time video feeds, or responding to dynamic image inputs, LFMs are built to absorb and apply new information as it arrives. [Discover more about the model in the original article](https://www.liquid.ai/liquid-foundation-models)

## Installation
```bash
$ pip3 install -U lfm-torch
```
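To confirm the install worked, you can query the installed version through the standard library (a quick sanity check, not part of the package itself):

```python
# Sanity check: report the installed version via the standard library.
from importlib.metadata import version

print(version("lfm-torch"))  # e.g. 0.0.3
```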

## Usage

```python
import torch
from lfm_torch.model import LFModel
from loguru import logger

# Instantiate and test the model
if __name__ == "__main__":
    batch_size, seq_length, embedding_dim = 32, 128, 512
    token_dim, channel_dim, expert_dim, adapt_dim, num_experts = (
        embedding_dim,
        embedding_dim,
        embedding_dim,
        128,
        4,
    )
    model = LFModel(
        token_dim, channel_dim, expert_dim, adapt_dim, num_experts
    )

    input_tensor = torch.randn(
        batch_size, seq_length, embedding_dim
    )  # 3D text tensor
    output = model(input_tensor)
    logger.info("Model forward pass complete.")
```
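Continuing from the example above, a quick way to inspect what came back. The exact output shape depends on `LFModel`'s internals, so this just reports whatever the forward pass returned:

```python
# Continues the usage example above; `model` and `output` are in scope.
print(f"output shape: {tuple(output.shape)}")
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")
```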


## Liquid Transformer 
A novel neural architecture combining Liquid Neural Networks, Transformer attention mechanisms, and Mixture of Experts (MoE) for adaptive processing and dynamic state updates. This is very experimental and early! We're working on a training script [here](./liquid_transformer_train.py); it still needs a real tokenizer (such as Llama's), though a stand-in sketch follows below. If you can help with this, let us know.
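
Until a proper tokenizer is wired in, a byte-level stand-in is enough to smoke-test the training loop. The sketch below is hypothetical and not part of `lfm_torch`; a real setup would swap in a Llama/SentencePiece tokenizer behind the same `encode`/`decode` interface:

```python
import torch


class ByteTokenizer:
    """Hypothetical byte-level stand-in; not part of lfm_torch."""

    vocab_size = 256  # one token per byte value

    def encode(self, text: str) -> torch.Tensor:
        return torch.tensor(list(text.encode("utf-8")), dtype=torch.long)

    def decode(self, ids: torch.Tensor) -> str:
        return bytes(ids.tolist()).decode("utf-8", errors="replace")


tok = ByteTokenizer()
ids = tok.encode("liquid foundation models")
print(ids.shape, tok.decode(ids))
```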


### Architecture Overview

```mermaid
flowchart TB
    subgraph "Liquid Transformer"
        Input["Input Sequence"] --> TL["Transformer Layer"]
        
        subgraph "Transformer Layer"
            direction TB
            MHA["Multi-Head Attention"] --> LC["Liquid Cell"]
            LC --> MOE["Mixture of Experts"]
            MOE --> LN["Layer Norm + Residual"]
        end
        
        subgraph "Liquid Cell Details"
            direction LR
            HS["Hidden State"] --> WH["W_h Linear"]
            Input2["Input"] --> WI["W_in Linear"]
            WH --> Add((+))
            WI --> Add
            Add --> Act["Activation"]
            Act --> LN2["LayerNorm"]
            LN2 --> DO["Dropout"]
        end
        
        subgraph "MoE Details"
            direction TB
            Input3["Input"] --> Gate["Gating Network"]
            Input3 --> E1["Expert 1"]
            Input3 --> E2["Expert 2"]
            Input3 --> E3["Expert N"]
            Gate --> Comb["Weighted Combination"]
            E1 --> Comb
            E2 --> Comb
            E3 --> Comb
        end
        
        TL --> Output["Output Sequence"]
    end
```
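
To make the diagram concrete, here is a minimal PyTorch sketch of the "Liquid Cell Details" and "MoE Details" boxes. It mirrors the flowchart, not the actual `lfm_torch` internals: the layer names (`w_h`, `w_in`), the `tanh` activation, and the dense (non-sparse) expert routing are assumptions for illustration.

```python
import torch
import torch.nn as nn


class LiquidCell(nn.Module):
    """Sketch of the "Liquid Cell Details" box: hidden state and input each
    pass through a linear map, then Add -> Activation -> LayerNorm -> Dropout."""

    def __init__(self, dim: int, dropout: float = 0.1):
        super().__init__()
        self.w_h = nn.Linear(dim, dim)   # "W_h Linear" in the diagram
        self.w_in = nn.Linear(dim, dim)  # "W_in Linear" in the diagram
        self.norm = nn.LayerNorm(dim)
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return self.drop(self.norm(torch.tanh(self.w_h(h) + self.w_in(x))))


class SimpleMoE(nn.Module):
    """Sketch of the "MoE Details" box: a gating network weights every
    expert's output (dense routing; real MoE layers usually route sparsely)."""

    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.gate(x).softmax(dim=-1)                # (..., E)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (..., D, E)
        return torch.einsum("...de,...e->...d", outs, weights)


x = torch.randn(2, 16, 64)  # (batch, seq, dim)
h = torch.zeros_like(x)     # initial hidden state
y = SimpleMoE(64, 4)(LiquidCell(64)(x, h))
print(y.shape)  # torch.Size([2, 16, 64])
```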



```python
import torch
from loguru import logger

from lfm_torch.liquid_t_moe import LiquidTransformer

# Example usage
if __name__ == "__main__":
    seq_len, batch_size, embed_size = 10, 2, 64
    num_heads, num_experts, expert_size, num_layers = 8, 4, 64, 6

    # Create the model
    model = LiquidTransformer(embed_size, num_heads, num_experts, expert_size, num_layers)

    # Example input tensor
    x = torch.randn(seq_len, batch_size, embed_size)

    # Forward pass
    output = model(x)
    logger.info(f"Model output shape: {output.shape}")
```
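
For training, a single step on random data looks roughly like this. The real training script lives in `liquid_transformer_train.py`; the MSE loss and random target below are placeholders just to show the shape of the loop:

```python
import torch
import torch.nn.functional as F

from lfm_torch.liquid_t_moe import LiquidTransformer

model = LiquidTransformer(64, 8, 4, 64, 6)  # embed, heads, experts, expert_size, layers
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

x = torch.randn(10, 2, 64)         # (seq_len, batch, embed_size), as above
output = model(x)
target = torch.randn_like(output)  # placeholder target

loss = F.mse_loss(output, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```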


# Citations
- All credit for the liquid transformer architecture goes to the original authors: [Google](https://arxiv.org/abs/2402.05385)


# License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

            
