# **llm-codegen-research**




<div>
<!-- badges from : https://shields.io/ -->
<!-- logos available : https://simpleicons.org/ -->
<a href="https://opensource.org/licenses/MIT">
<img alt="MIT License" src="https://img.shields.io/badge/Licence-MIT-yellow?style=for-the-badge&logo=docs&logoColor=white" />
</a>
<a href="https://www.python.org/">
<img alt="Python 3" src="https://img.shields.io/badge/Python_3-blue?style=for-the-badge&logo=python&logoColor=white" />
</a>
<a href="https://openai.com/blog/openai-api/">
<img alt="OpenAI API" src="https://img.shields.io/badge/OpenAI-412991?style=for-the-badge&logo=openai&logoColor=white" />
</a>
<a href="https://www.anthropic.com/api/">
<img alt="Anthropic API" src="https://img.shields.io/badge/Claude-D97757?style=for-the-badge&logo=claude&logoColor=white" />
</a>
<a href="https://api.together.ai/">
<img alt="together.ai API" src="https://img.shields.io/badge/together.ai-B5B5B5?style=for-the-badge&logoColor=white" />
</a>
<a href="https://docs.mistral.ai/api/">
<img alt="Mistral API" src="https://img.shields.io/badge/Mistral-FA520F?style=for-the-badge&logo=mistral&logoColor=white" />
</a>
<a href="https://api-docs.deepseek.com/">
<img alt="DeepSeek API" src="https://img.shields.io/badge/DeepSeek-003366?style=for-the-badge&logoColor=white" />
</a>
</div>

## *about*

A collection of methods and classes I repeatedly use when conducting research on LLM code-generation.
It covers both prompting various LLMs and analysing their markdown responses.

## *installation*

Install directly from PyPI using pip:

```shell
pip install llm-codegen-research
```
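
To confirm the install, you can check the package version via standard packaging metadata (this uses only the standard library, not the package's own API):

```python
from importlib.metadata import version

# the distribution name is the PyPI package name
print(version("llm-codegen-research"))
```
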

## *usage*

First configure environment variables for the APIs you want to use:

```bash
export OPENAI_API_KEY=...
export ANTHROPIC_API_KEY=...
export TOGETHER_API_KEY=...
export MISTRAL_API_KEY=...
export DEEPSEEK_API_KEY=...
```
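
To double-check your setup from Python, here is a minimal sketch (standard library only) that reports which of the provider keys above are set in the current environment:

```python
import os

# the provider keys used in the exports above
PROVIDER_KEYS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "TOGETHER_API_KEY",
    "MISTRAL_API_KEY",
    "DEEPSEEK_API_KEY",
]

for key in PROVIDER_KEYS:
    print(f"{key}: {'set' if os.environ.get(key) else 'missing'}")
```
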
You can get a quick response from an LLM:
```python
from llm_cgr import generate, Markdown

response = generate("Write python code to generate the nth fibonacci number.")

markdown = Markdown(text=response)
```
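
The `Markdown` class handles parsing the response for analysis; its exact API isn't shown here, so as a package-independent illustration of the kind of analysis involved, this sketch extracts fenced code blocks from a raw markdown string using the standard library `re` module:

```python
import re

FENCE = "`" * 3  # a literal triple-backtick, built up to keep this example readable

# capture the optional language tag and the body of each fenced block
CODE_BLOCK = re.compile(FENCE + r"(\w*)\n(.*?)" + FENCE, re.DOTALL)


def extract_code_blocks(text: str) -> list[tuple[str, str]]:
    """Return (language, code) pairs for each fenced code block in text."""
    return [(lang or "text", body.strip()) for lang, body in CODE_BLOCK.findall(text)]


sample = f"Here you go:\n{FENCE}python\nprint('hi')\n{FENCE}\n"
print(extract_code_blocks(sample))  # [('python', "print('hi')")]
```
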
Or define a client to generate multiple responses, or have a chat interaction:

```python
from llm_cgr import get_llm

# create the llm
llm = get_llm(
    model="gpt-4.1-mini",
    system="You're a really funny comedian.",
)

# get multiple responses and see the difference
responses = llm.generate(
    user="Tell me a joke I haven't heard before!",
    samples=3,
)
print(responses)

# or have a multi-prompt chat interaction
llm.chat(user="Tell me a knock knock joke?")
llm.chat(user="Wait, I'm meant to say who's there!")
print(llm.history)
```
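
The same client API scales naturally to batched research runs; here is a minimal sketch (using only the `get_llm` and `llm.generate` calls shown above) that samples several responses per prompt and stores them as JSON:

```python
import json

from llm_cgr import get_llm

llm = get_llm(
    model="gpt-4.1-mini",
    system="You are a helpful coding assistant.",
)

tasks = [
    "Write python code to reverse a linked list.",
    "Write python code to check if a string is a palindrome.",
]

# sample multiple responses per task so output variance can be analysed later
results = {task: llm.generate(user=task, samples=3) for task in tasks}

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```
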

## *development*

Clone the repository code:

```shell
git clone https://github.com/itsluketwist/llm-codegen-research.git
```
We use [`uv`](https://astral.sh/blog/uv) for project management.
Once cloned, create a virtual environment, then install `uv` and sync the project:

```shell
python -m venv .venv

. .venv/bin/activate

pip install uv

uv sync
```
Use `make` commands to lint and test:
```shell
make lint

make test
```
Use `uv` to add new dependencies to the project and update `uv.lock`:
```shell
uv add openai
```