nanollama

Name: nanollama
Version: 0.0.5
Home page: https://github.com/JosefAlbers/nanollama32
Summary: Nano Llama
Upload time: 2025-02-14 21:40:54
Author: Josef Albers
Requires Python: >=3.12.8
License: MIT
Requirements: mlx, numpy, huggingface-hub, tiktoken
# nanollama

A compact, efficient implementation of Llama 3.2 in a single file, with minimal dependencies: **no transformers library required, even for tokenization**.

## Overview

`nanollama` provides a lightweight and straightforward implementation of the Llama model. It features:

- Minimal dependencies
- Easy-to-use interface
- Efficient performance suitable for various applications

## Quick Start

To get started, install the package (Python 3.12.8 or newer is required):

```zsh
pip install nanollama
```
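Installing from PyPI pulls in the pinned dependencies listed above (`mlx`, `numpy`, `huggingface-hub`, `tiktoken`). Since inference runs on MLX, an Apple-silicon Mac is the expected target platform.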

Here’s a quick example of how to use `nanollama`:

```python
from nanollama import Chat

# Initialize the chat instance
chat = Chat()

# Start a conversation
chat("What's the weather like in Busan?")
# Llama responds with information about the weather

# Follow-up question that builds on the previous context
chat("And how about the temperature?")
# Llama responds with the temperature, remembering the previous context

# Another follow-up, further utilizing context
chat("What should I wear?")
# Llama suggests clothing based on the previous responses
```
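Because a `Chat` instance keeps the conversation context across calls, it is easy to wrap in a small interactive loop. A minimal sketch, assuming only the call interface shown above (each call emits Llama's reply):

```python
from nanollama import Chat

chat = Chat()
while True:
    prompt = input("you> ").strip()
    if prompt.lower() in {"quit", "exit", ""}:
        break
    # The same instance is reused, so follow-ups keep the running context
    chat(prompt)
```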

## Command-Line Interface

You can also run `nanollama` from the command line:

```zsh
nlm how to create a new conda env
# Llama responds with ways to create a new conda environment and prompts the user for further follow-up questions
```

### Managing Chat History

- **--history**: Path to the JSON file that chat history is saved to and loaded from. If the file does not exist, it will be created.
- **--resume**: Use this option to resume the conversation from a specific point in the chat history.

For example, you can specify `0` to resume from the most recent entry:

```zsh
nlm "and to list envs?" --resume 0
```

Or, you can resume from a specific entry in history:

```zsh
nlm "and to delete env?" --resume 20241026053144
```
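The two options can be combined, so a resumed conversation keeps writing to the same history file (the path below is only an example):

```zsh
nlm "which env am I in right now?" --history ~/.nlm/conda_chat.json --resume 0
```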

### Adding Text from Files

You can include text from any number of external files by using the `{...}` syntax in your input. For example, if you have a text file named `langref.rst`, you can include its content in your input like this:

```zsh
nlm to create reddit bots {langref.rst}
```
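Multiple placeholders can appear in a single prompt; each is replaced by the contents of the named file (the file names here are hypothetical):

```zsh
nlm compare these two scripts {bot_v1.py} {bot_v2.py}
```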

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.

## Acknowledgements

This project builds upon the [MLX implementation](https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/models/llama.py) and [Karpathy's LLM.c implementation](https://github.com/karpathy/llm.c/blob/master/train_llama3.py) of the Llama model. Special thanks to the contributors of both projects for their outstanding work and inspiration.

## Contributing

Contributions are welcome! Feel free to submit issues or pull requests.

            
