llm-bench

Name: llm-bench
Version: 0.4.32
Home page: https://github.com/b-Snaas/ollama-benchmark.git
Summary: LLM Benchmarking tool for OLLAMA
Upload time: 2024-07-23 16:39:44
Author: Snaas
Requires Python: <4.0,>=3.8
License: MIT
Keywords: benchmark, llama, ollama, llms, local
Requirements: anyio, certifi, charset-normalizer, cli-exit-tools, click, colorama, exceptiongroup, fake-winreg, GPUtil, h11, httpcore, httpx, idna, lib-detect-testenv, lib-platform, lib-registry, markdown-it-py, mdurl, ollama, psutil, Pygments, PyYAML, requests, rich, shellingham, sniffio, typer, typing_extensions, urllib3, wrapt, speedtest-cli

# llm-benchmark (ollama-benchmark)

LLM Benchmark for Throughput via Ollama (Local LLMs)

## Installation Steps

```bash
pip install llm-bench
```

## Usage (for general users)

```bash
llm_bench run
```
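
Under the hood, a run measures decode throughput in tokens per second for each installed model. Below is a minimal sketch of one such measurement, assuming the `ollama` Python client (a declared dependency) and the `eval_count`/`eval_duration` fields that Ollama's generate endpoint returns; the package's actual prompts, models, and reporting differ.

```python
# Hypothetical single-model measurement; not the package's actual code.
import ollama

def tokens_per_second(model: str, prompt: str) -> float:
    resp = ollama.generate(model=model, prompt=prompt)
    # Ollama reports eval_count (tokens generated) and eval_duration
    # (nanoseconds spent generating them) in the final response.
    return resp["eval_count"] / resp["eval_duration"] * 1e9

print(f"gemma:2b: {tokens_per_second('gemma:2b', 'Why is the sky blue?'):.1f} tokens/s")
```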

## Ollama installation and model requirements

A 7B model can be run on machines with 8GB of RAM.

A 13B model can be run on machines with 16GB of RAM.

## Usage explanation

On Windows, Linux, and macOS, the tool first detects the machine's RAM size and then downloads the required LLM models.

When the RAM size is at least 4GB but less than 7GB, it checks whether gemma:2b exists and implicitly pulls the model:

```bash
ollama pull gemma:2b
```

When the RAM size is greater than 7GB but less than 15GB, it checks whether the following models exist and implicitly pulls them:

```bash
ollama pull gemma:2b
ollama pull gemma:7b
ollama pull mistral:7b
ollama pull llama2:7b
ollama pull llava:7b
```

When the RAM size is greater than 15GB, it checks whether the following models exist and implicitly pulls them:

```bash
ollama pull gemma:2b
ollama pull gemma:7b
ollama pull mistral:7b
ollama pull llama2:7b
ollama pull llama2:13b
ollama pull llava:7b
ollama pull llava:13b
```
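
Here is a minimal sketch of this tier selection, using `psutil` (a declared dependency) to read total RAM and the `ollama` client to pull models. The thresholds and model lists mirror the tiers above, but the function names are illustrative, not the package's actual API.

```python
# Illustrative RAM-tier selection; names are hypothetical, the
# thresholds and model lists follow the tiers described above.
from __future__ import annotations

import ollama
import psutil

TIERS = [
    (4, ["gemma:2b"]),
    (7, ["gemma:2b", "gemma:7b", "mistral:7b", "llama2:7b", "llava:7b"]),
    (15, ["gemma:2b", "gemma:7b", "mistral:7b", "llama2:7b",
          "llama2:13b", "llava:7b", "llava:13b"]),
]

def models_for_this_machine() -> list[str]:
    ram_gb = psutil.virtual_memory().total / 2**30
    chosen: list[str] = []
    for threshold_gb, models in TIERS:
        if ram_gb >= threshold_gb:
            chosen = models  # keep the largest tier this machine qualifies for
    return chosen

for name in models_for_this_machine():
    ollama.pull(name)  # downloads only layers that are not already local
```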

## Manual Poetry installation (advanced)

<https://python-poetry.org/docs/#installing-manually>

## Developer setup on Windows PowerShell, Ubuntu Linux, or macOS

```bash
python3 -m venv .venv
. ./.venv/bin/activate  # on Windows PowerShell: .venv\Scripts\Activate.ps1
pip install -U pip setuptools
pip install poetry
```

## Usage in a Python virtual environment

```bash
poetry shell
poetry install
llm_bench hello jason
```

### Example #1: Send system info and benchmark results to a remote server

```bash
llm_bench run
```

### Example #2: Do not send system info and benchmark results to a remote server

```bash
llm_bench run --no-sendinfo
```

### Example #3: Run the benchmark with an explicit path to the ollama executable (when you have built your own development version of Ollama)

```bash
llm_bench run --ollamabin=~/code/ollama/ollama
```
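
The CLI is built with Typer (a declared dependency). The following is a hedged sketch of how boolean and path options like `--no-sendinfo` and `--ollamabin` are typically declared with Typer; the package's real option definitions may differ.

```python
# Hypothetical Typer command mirroring the flags used above; not the
# package's actual source.
import typer

app = typer.Typer()

@app.command()
def run(
    sendinfo: bool = typer.Option(True, help="Send system info and results to a remote server."),
    ollamabin: str = typer.Option("ollama", help="Path to the ollama executable."),
) -> None:
    # Typer derives --sendinfo/--no-sendinfo automatically from the bool.
    typer.echo(f"sendinfo={sendinfo} ollamabin={ollamabin}")

if __name__ == "__main__":
    app()
```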

## Reference

[Ollama](https://ollama.com)


            
