local-sfmx


Name: local-sfmx
Version: 0.0.4
Home page: https://github.com/kyegomez/LocalSoftmax
Summary: local-sftmx - Pytorch
Upload time: 2023-09-29 04:13:56
Maintainer:
Docs URL: None
Author: Kye Gomez
Requires Python: >=3.6,<4.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# LocalSoftmax
Local Softmax parallelizes the softmax computation by splitting a tensor into smaller sub-tensors and applying the softmax function to each of them independently. In other words, it computes a "local" softmax on each chunk of the tensor instead of on the entire tensor.
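For intuition, the same chunk-wise normalization can be written with plain PyTorch calls. The toy example below (chunking a single row of logits along its only dimension, which is an assumption made purely for illustration) shows that each chunk is normalized on its own, so the values sum to 1 within each chunk rather than across the whole row.

```python
import torch

logits = torch.tensor([1.0, 2.0, 3.0, 4.0])

# Global softmax: one distribution over all four values.
global_sm = torch.softmax(logits, dim=-1)

# "Local" softmax: split into two chunks and normalize each chunk independently.
chunks = torch.chunk(logits, 2, dim=-1)
local_sm = torch.cat([torch.softmax(c, dim=-1) for c in chunks])

print(global_sm.sum())                          # tensor(1.)  -> one distribution
print(local_sm[:2].sum(), local_sm[2:].sum())   # tensor(1.) tensor(1.) -> one per chunk
```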

# Appreciation
* Lucidrains
* Agorians



# Install
`pip install local-sfmx`


## Usage
```python
import torch
from local_sfmx import local_softmax

tensor = torch.rand(10, 5)         # random input, shape (10, 5)
result = local_softmax(tensor, 2)  # softmax applied locally over 2 chunks
print(result)
```

# Algorithm
```
function LocalSoftmax(tensor, num_chunks):
    split the tensor into `num_chunks` smaller tensors
    for each smaller tensor:
        apply standard softmax
    concatenate the results
    return the concatenated tensor
```
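
The pseudocode above maps directly onto `torch.chunk`, `torch.softmax`, and `torch.cat`. The following is a minimal sketch of that idea; the function name, signature, and chunking dimension are assumptions for illustration and may differ from the library's actual `local_softmax`.

```python
import torch


def local_softmax_sketch(tensor: torch.Tensor, num_chunks: int, dim: int = 0) -> torch.Tensor:
    """Sketch of the algorithm: split, softmax each chunk independently, concatenate.

    Note: the chunking dimension (`dim`) and the softmax dimension (last dim)
    are assumptions; the published `local_sfmx.local_softmax` may differ.
    """
    # Split the tensor into `num_chunks` sub-tensors along `dim`.
    chunks = torch.chunk(tensor, num_chunks, dim=dim)
    # Apply a standard softmax to each chunk independently (over the last dimension).
    softmaxed = [torch.softmax(chunk, dim=-1) for chunk in chunks]
    # Concatenate the locally normalized chunks back together.
    return torch.cat(softmaxed, dim=dim)


if __name__ == "__main__":
    x = torch.rand(10, 5)
    out = local_softmax_sketch(x, 2)
    print(out.shape)  # torch.Size([10, 5])
```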

# License
MIT


            

Raw data

{
    "_id": null,
    "home_page": "https://github.com/kyegomez/LocalSoftmax",
    "name": "local-sfmx",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.6,<4.0",
    "maintainer_email": "",
    "keywords": "artificial intelligence,deep learning,optimizers,Prompt Engineering",
    "author": "Kye Gomez",
    "author_email": "kye@apac.ai",
    "download_url": "https://files.pythonhosted.org/packages/79/11/40c77140d775f750eeaa0c66b2fa980c1c8e66c3cbd5eb7c1b3fa277c66d/local_sfmx-0.0.4.tar.gz",
    "platform": null,
    "description": "[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)\n\n# LocalSoftmax\nLocal Softmax parallelize the softmax computation by splitting the tensor into smaller sub-tensors and applying the softmax function on each of these smaller tensors independently. In other words, we want to compute a \"local\" softmax on each chunk of the tensor, instead of on the entire tensor.\n\n# Appreciation\n* Lucidrains\n* Agorians\n\n\n\n# Install\n`pip install local-sftmx`\n\n\n## Usage\n```python\nimport torch\nfrom local_sfmx import local_softmax\n\ntensor = torch.rand(10, 5)\nresult = local_softmax(tensor, 2)\nprint(result)\n```\n\n# Algorithm\nfunction LocalSoftmax(tensor, num_chunks):\n    split tensors into `num_chunks` smaller tensors\n    for each smaller tensor:\n        apply standard softmax\n    concatenate the results\n    return concatenated tensor\n\n# License\nMIT\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "local-sftmx - Pytorch",
    "version": "0.0.4",
    "project_urls": {
        "Homepage": "https://github.com/kyegomez/LocalSoftmax",
        "Repository": "https://github.com/kyegomez/LocalSoftmax"
    },
    "split_keywords": [
        "artificial intelligence",
        "deep learning",
        "optimizers",
        "prompt engineering"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ab64ea6bac3a2204ded329774b333720db8b69f05ba3a7323dcfa267e0d58b1e",
                "md5": "7b488408a6488c4a9b337531ada2bf84",
                "sha256": "f951424a700eedaf780c2fd730ad71a0f4d1238aa0268600a7f70f4299a4370f"
            },
            "downloads": -1,
            "filename": "local_sfmx-0.0.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "7b488408a6488c4a9b337531ada2bf84",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.6,<4.0",
            "size": 4702,
            "upload_time": "2023-09-29T04:13:54",
            "upload_time_iso_8601": "2023-09-29T04:13:54.702535Z",
            "url": "https://files.pythonhosted.org/packages/ab/64/ea6bac3a2204ded329774b333720db8b69f05ba3a7323dcfa267e0d58b1e/local_sfmx-0.0.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "791140c77140d775f750eeaa0c66b2fa980c1c8e66c3cbd5eb7c1b3fa277c66d",
                "md5": "1dbb59fd974c2d73146c6e6f113b3632",
                "sha256": "e624b37aabce4a29d453dc113c41b7af09c0910b9a6f00bba1417de4905d2c09"
            },
            "downloads": -1,
            "filename": "local_sfmx-0.0.4.tar.gz",
            "has_sig": false,
            "md5_digest": "1dbb59fd974c2d73146c6e6f113b3632",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.6,<4.0",
            "size": 4589,
            "upload_time": "2023-09-29T04:13:56",
            "upload_time_iso_8601": "2023-09-29T04:13:56.235004Z",
            "url": "https://files.pythonhosted.org/packages/79/11/40c77140d775f750eeaa0c66b2fa980c1c8e66c3cbd5eb7c1b3fa277c66d/local_sfmx-0.0.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-09-29 04:13:56",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "kyegomez",
    "github_project": "LocalSoftmax",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "local-sfmx"
}
        