# [LLMPop](https://pypi.org/project/llmpop/)
The Python library that lets you spin up any LLM with a single function.
#### Why we needed this library:
1. Needed a single, simple command for any LLM, including the free local LLMs that Ollama offers.
2. Needed a better way to introduce a code library to an LLM that helps you build code. The `llmpop` library comes with a machine-readable file that is minimal and sufficient, see [**`LLM_READABLE_GUIDE.md`**](https://raw.githubusercontent.com/LiorGazit/llmpop/main/LLM_READABLE_GUIDE.md).
Add it to your conversation with the coding LLM and it will learn how to build code with `llmpop`. From a security standpoint, this approach is safer than directing your LLM to read someone's entire codebase.
### Devs: [Lior Gazit](https://github.com/LiorGazit) and GPT5
Total hours spent on this project so far: `23 hours`
### Quick run of LLMPop:
Quickest on Colab:
<a target="_blank" href="https://colab.research.google.com/github/LiorGazit/llmpop/blob/main/examples/quick_run_llmpop.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Or, to set it up yourself, pick the free `T4 GPU` runtime and copy the code over:
**Setup:**
```python
%pip -q install llmpop
from llmpop import init_llm
```
**Run:**
```python
# Start with Meta's Llama. If you want a stronger (and bigger) model, try OpenAI's free "gpt-oss:20b":
model = init_llm(model="llama3.2:1b", provider="ollama")
user_prompt = "What OS is better for deploying high scale programs in production? Linux, or Windows?"
print(model.invoke(user_prompt).content)
```
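The object returned by `init_llm` is a LangChain-style chat model, so `.invoke()` returns a message whose text is in `.content`. Swapping in the larger model mentioned in the comment is a one-argument change; a minimal sketch, assuming the `gpt-oss:20b` weights fit your runtime:
```python
# Same interface, bigger model (assumes "gpt-oss:20b" has been pulled by Ollama):
model = init_llm(model="gpt-oss:20b", provider="ollama")
print(model.invoke(user_prompt).content)
```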
## Examples and tools built with LLMPop
- `notebooks/`
- `examples/`
## Features
- Plug-and-play local LLMs via Ollama: no cloud or API costs required.
- Easy remote API support (OpenAI, extendable).
- Unified interface: seamlessly switch between local and remote models in your code (see the sketch after this list).
- Resource monitoring: track CPU, memory, and (optionally) GPU usage while your agents run.
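For example, switching between a local and a remote model is just a matter of changing two arguments; a minimal sketch, assuming an `OPENAI_API_KEY` environment variable for the remote branch:
```python
import os

from llmpop import init_llm

# Use the remote provider only when an API key is available; otherwise stay local.
if os.environ.get("OPENAI_API_KEY"):
    model = init_llm(model="gpt-4o", provider="openai")
else:
    model = init_llm(model="llama3.2:1b", provider="ollama")

print(model.invoke("Hello!").content)
```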
## Using LLMPop while coding with an LLM/chatbot
A dedicated, machine-readable guide file is designed to be the single file a bot needs to get to know LLMPop and build your code with it.
This guide file is [**`LLM_READABLE_GUIDE.md`**](https://raw.githubusercontent.com/LiorGazit/llmpop/main/LLM_READABLE_GUIDE.md).
Either upload this file to your bot's conversation or paste its contents into the bot's context; this lets your bot leverage LLMPop as it builds code.
This machine-readable file is especially useful when your bot has no internet access and can't learn about code libraries it wasn't trained on.
More on this guide file in `docs/index.md`.
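If you prefer to fetch the guide programmatically (for example, to inject it into a bot's context), a minimal sketch using `requests` with the raw URL above:
```python
import requests

# Raw URL of the machine-readable guide (from the link above).
GUIDE_URL = "https://raw.githubusercontent.com/LiorGazit/llmpop/main/LLM_READABLE_GUIDE.md"

# Download once, then paste the text into your bot's context window.
guide_text = requests.get(GUIDE_URL, timeout=30).text
print(f"Guide length: {len(guide_text)} characters")
```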
## Quick start via Colab
Start by running `run_ollama_in_colab.ipynb` in [Colab](https://colab.research.google.com/github/LiorGazit/llmpop/blob/main/examples/run_ollama_in_colab.ipynb).
📖 **Quick Guides**
- **Library usage (human-readable):** See [`LLM_READABLE_GUIDE.md`](./LLM_READABLE_GUIDE.md)
- **Full docs homepage:** See [`docs/index.md`](./docs/index.md)
## Codebase Structure
```
llmpop/
├─ .github/
│  └─ workflows/
│     └─ ci.yml
├─ docs/
│  └─ index.md
├─ notebooks/
│  └─ multi_llm_webapp.ipynb
├─ examples/
│  ├─ quick_run_llmpop.ipynb
│  ├─ quick_run_llmpop.py
│  └─ run_ollama_in_colab.ipynb
├─ src/
│  └─ llmpop/
│     ├─ __init__.py
│     ├─ init_llm.py
│     ├─ monitor_resources.py
│     ├─ py.typed
│     └─ version.py
├─ tests/
│  ├─ test_init_llm.py
│  ├─ test_llm_readable_guide.py
│  └─ test_monitor_resources.py
├─ .gitignore
├─ .pre-commit-config.yaml
├─ CHANGELOG.md
├─ CODE_OF_CONDUCT.md
├─ CONTRIBUTING.md
├─ DEVLOG.md
├─ LICENSE
├─ LLM_READABLE_GUIDE.md
├─ Makefile
├─ MANIFEST.in
├─ pyproject.toml
├─ README.md
├─ requirements-dev.txt
└─ requirements.txt
```
Where:
- `src/` layout is the modern standard for packaging.
- `tests/` use pytest; we'll mock shell/network so CI doesn't try to actually install or run Ollama (a sketch of this pattern follows below).
- `examples/` contains notebooks users can run locally or on Colab.
- `docs/` is optional for now; you can add mkdocs later.
- CI runs lint + unit tests on pushes and PRs.
- `CHANGELOG` follows Keep a Changelog; `DEVLOG` is the running engineering journal.
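A hedged sketch of that mocking pattern (the patch targets are illustrative; the real tests may patch different internals of `llmpop`):
```python
from unittest import mock

from llmpop import init_llm


def test_init_llm_without_real_ollama():
    # Illustrative patch targets: block real shell and network activity in CI.
    with mock.patch("subprocess.run"), mock.patch("requests.get"):
        try:
            init_llm(model="llama3.2:1b", provider="ollama")
        except Exception:
            # Setup may still fail against the mocks; the point is that no
            # real process is spawned and no real HTTP request is sent.
            pass
```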
## Quick setup
1. Install from PyPI:
`pip -q install llmpop`
2. Try it
```python
import os  # needed only for the commented-out OpenAI branch below

from llmpop import init_llm, start_resource_monitoring
from langchain_core.prompts import ChatPromptTemplate

model = init_llm(model="gemma3:1b", provider="ollama")
# Or, for a remote provider:
# os.environ["OPENAI_API_KEY"] = "sk-..."
# model = init_llm(model="gpt-4o", provider="openai")

prompt = ChatPromptTemplate.from_template("Q: {q}\nA:")
print((prompt | model).invoke({"q": "What is an agent?"}).content)
```
3. Optional: Resource Monitoring
```python
monitor_thread = start_resource_monitoring(duration=600, interval=10)
```
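`start_resource_monitoring` returns the handle assigned to `monitor_thread` above; a minimal usage sketch, assuming it behaves like a standard `threading.Thread`:
```python
from llmpop import init_llm, start_resource_monitoring

# Sample CPU/memory (and optionally GPU) every 10 s for up to 10 minutes.
monitor_thread = start_resource_monitoring(duration=600, interval=10)

model = init_llm(model="gemma3:1b", provider="ollama")
print(model.invoke("Why monitor resources during inference?").content)

# Assumption: the returned object is thread-like; join() blocks until the
# monitoring window (duration) elapses.
monitor_thread.join()
```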
Enjoy!