llmprototyping 0.1.0.dev6

Home page: https://github.com/alejandrolc/llmprototyping
Summary: A lightweight set of tools to use several LLM and embedding APIs
Author: Alejandro López Correa
Requires Python: >=3.9, <4
Keywords: llm, rag, openai, groq, ollama, anthropic
Uploaded: 2024-08-25 18:12:00
# llmprototyping

`llmprototyping` is a Python package designed to provide easy and uniform access to various large language model (LLM) and embedding APIs, along with basic functionality for building small-scale artificial intelligence applications.

## Features

- **Uniform API Access**: Simplify your interactions with different LLM and embedding APIs using a single interface (see the sketch after this list).
- **Basic AI Application Tools**: Get started quickly with tools designed to support the development of AI applications.
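
As a quick illustration of the uniform interface, here is a sketch (not from the package documentation) that runs the same request against two providers, changing only the model name and credentials. It assumes `GROQ_API_KEY` and `OPENAI_API_KEY` are set; see the Usage examples below for full setup.

```python
# Sketch: the same call pattern works across providers; only the model name
# and the API key passed to the factory change.
import os
import llmprototyping as llmp

factory = llmp.LLMChatCompletionFactory
user_msg = llmp.Message(content="Name three colours.")
sys_msg = llmp.Message(content="Provide an answer in json", role="system")

for model_name, key_var in [('groq/llama3-8b-8192', 'GROQ_API_KEY'),
                            ('openai/gpt-4o-mini', 'OPENAI_API_KEY')]:
    model = factory.build(model_name, {'api_key': os.getenv(key_var)})
    resp = model.query([user_msg,sys_msg], json_response=True, temperature=0)
    resp.show()
```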

## License

Apache License Version 2.0

## Compatibility

Python 3.9+

## Installation

```bash
pip install llmprototyping
```

## Available models

### Chat completion models

- groq/llama-3.1-70b-versatile
- groq/llama-3.1-8b-instant
- groq/llama3-70b-8192
- groq/llama3-8b-8192
- groq/mixtral-8x7b-32768
- groq/gemma2-9b-it
- groq/gemma-7b-it
- openai/gpt-4o-mini
- openai/gpt-4o
- openai/gpt-4-turbo
- openai/gpt-4-turbo-preview
- openai/gpt-3.5-turbo
- anthropic/claude-3-opus-20240229
- anthropic/claude-3-sonnet-20240229
- anthropic/claude-3-haiku-20240307
- ollama/*

### Embedding models

- openai/text-embedding-3-small
- openai/text-embedding-3-large
- openai/text-embedding-ada-002
- ollama/*

## Usage

Note: the examples use python-dotenv to load API keys from a `.env` file. It is not a dependency of llmprototyping, so it must be installed separately (`pip install python-dotenv`).
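
For reference, a `.env` file covering every example below might look like this (the variable names are the ones used in the examples; the values are placeholders):

```
GROQ_API_KEY=...
OPENAI_API_KEY=...
OLLAMA_HOST=http://192.168.1.2:11434
```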

### Simple chat completion call

```python
import os
from dotenv import load_dotenv
load_dotenv()

groq_api_key = os.getenv('GROQ_API_KEY')

import llmprototyping as llmp
factory = llmp.LLMChatCompletionFactory
model = factory.build('groq/llama3-70b-8192', {'api_key': groq_api_key})
user_msg = llmp.Message(content="Please give me a list of ten colours and some place that is related to each one.")
sys_msg = llmp.Message(content="Provide an answer in json", role="system")
resp = model.query([user_msg,sys_msg], json_response=True, temperature=0)
resp.show()
```

<details>
  <summary>Output</summary>

```
Response successful; tokens: i:43 o:145 message:
Message role:assistant content:
{
"colours": [
{"colour": "Red", "place": "Rome"},
{"colour": "Orange", "place": "Netherlands"},
{"colour": "Yellow", "place": "Sunshine Coast"},
{"colour": "Green", "place": "Emerald Isle"},
{"colour": "Blue", "place": "Blue Mountains"},
{"colour": "Indigo", "place": "Indigo Bay"},
{"colour": "Violet", "place": "Violet Hill"},
{"colour": "Pink", "place": "Pink Sands Beach"},
{"colour": "Brown", "place": "Brown County"},
{"colour": "Grey", "place": "Greytown"}
]
}
```
</details>
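
Because the response was requested as JSON, the message body can be parsed directly. A minimal sketch: it assumes the call above succeeded, uses the `resp.message.content` attribute shown in the Ollama example below, and the key names follow the sample output, which the model does not guarantee.

```python
import json

# Parse the assistant's JSON reply; key names follow the sample output above.
data = json.loads(resp.message.content)
for item in data['colours']:
    print(f"{item['colour']}: {item['place']}")
```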

### List available models

```python
import llmprototyping as llmp

print('chat completion models:')
for model_name in llmp.LLMChatCompletionFactory.available_models:
    print(f"  {model_name}")

print('embedding models:')
for model_name in llmp.EmbeddingComputerFactory.available_models:
    print(f"  {model_name}")
```

### Embeddings example: search

```python
knowledge_list = [
    "Rome was founded in 753 BCE according to tradition, by Romulus and Remus.",
    "The Roman Republic was established in 509 BCE after overthrowing the last Etruscan kings.",
    "Julius Caesar became the perpetual dictator in 44 BCE, shortly before his assassination.",
    "The Roman Empire officially began when Octavian received the title of Augustus in 27 BCE.",
    "At its peak, the Roman Empire extended from Hispania to Mesopotamia.",
    "The capital of the Empire was moved to Constantinople by Constantine I in 330.",
    "The fall of Rome occurred in 476 CE when the last Western Roman emperor, Romulus Augustulus, was deposed.",
    "Roman culture greatly influenced law, politics, language, and architecture in the Western world.",
    "The expansion of Christianity as the official religion was promoted by Constantine after the Battle of the Milvian Bridge in 312.",
    "Roman society was heavily stratified between patricians, plebeians, and slaves."
]

question = "What is the name of the last emperor?"

import os
from dotenv import load_dotenv
load_dotenv()

openai_api_key = os.getenv('OPENAI_API_KEY')

import llmprototyping as llmp

import shelve

# Cache embeddings on disk so repeated runs do not recompute them.
db = shelve.open('test_embeddings.db')

def get_embedding(text, computer):
    # Return the cached embedding for this text if present.
    if text in db:
        return llmp.EmbeddingVector.from_json(db[text])

    # Otherwise compute it, store its JSON form, and return it.
    print(f'computing embedding for "{text}"')
    em = computer.get_embedding(text)
    db[text] = em.to_json()
    return em

factory = llmp.EmbeddingComputerFactory
computer = factory.build('openai/text-embedding-3-small', {'api_key': openai_api_key})        

entry_table = dict()

for entry_id, entry_text in enumerate(knowledge_list):
    em = get_embedding(entry_text, computer)
    entry_table[entry_id] = em

em_question = get_embedding(question, computer)

vdb = llmp.FAISSDatabase(embedding_type=computer.model_name, embedding_size=computer.vector_size)
vdb.put_records(entry_table)

print(f"query: {question}")
results = vdb.search(em_question)
for distance, entry_id in results:
    print(f"{distance:.3f} {entry_id} {knowledge_list[entry_id]}")
```

<details>
  <summary>Output</summary>

```
computing embedding for "What is the name of the last emperor?"
query: What is the name of the last emperor?
1.105 6 The fall of Rome occurred in 476 CE when the last Western Roman emperor, Romulus Augustulus, was deposed.
1.337 2 Julius Caesar became the perpetual dictator in 44 BCE, shortly before his assassination.
1.457 3 The Roman Empire officially began when Octavian received the title of Augustus in 27 BCE.
1.522 5 The capital of the Empire was moved to Constantinople by Constantine I in 330.
1.559 1 The Roman Republic was established in 509 BCE after overthrowing the last Etruscan kings.
```

Values for distances may vary depending on the actual embeddings computed.
</details>
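
The retrieved passages can then be handed to a chat model to answer the question, closing a minimal RAG loop. The following sketch uses only calls shown elsewhere in this README and additionally assumes `GROQ_API_KEY` is set:

```python
# Concatenate the retrieved passages and ask a chat model to answer from them.
context = "\n".join(knowledge_list[entry_id] for _, entry_id in results)

groq_api_key = os.getenv('GROQ_API_KEY')
chat_model = llmp.LLMChatCompletionFactory.build('groq/llama3-70b-8192',
                                                 {'api_key': groq_api_key})
sys_msg = llmp.Message(content="Answer the question using only the provided context. Provide an answer in json",
                       role="system")
user_msg = llmp.Message(content=f"Context:\n{context}\n\nQuestion: {question}")
resp = chat_model.query([user_msg,sys_msg], json_response=True, temperature=0)
resp.show()
```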

### Ollama example: chat

`OLLAMA_HOST` is the URI of the Ollama server, e.g. `http://192.168.1.2:11434`.

Using Ollama models requires a call to `ollama_discover`, which registers the models available on the server.

`ollama_pull_model` can be used to pull a model onto the server.

```python
import os
from dotenv import load_dotenv
load_dotenv()

ollama_host = os.getenv('OLLAMA_HOST')

import llmprototyping as llmp

llmp.ollama_discover(host=ollama_host)
llmp.ollama_pull_model(host=ollama_host, model_name='phi3')

print('chat completion models:')
for model_name in llmp.LLMChatCompletionFactory.available_models:
    print(f"  {model_name}")
print()

factory = llmp.LLMChatCompletionFactory
model = factory.build('ollama/phi3')
user_msg = llmp.Message(content="Please give me a list of ten colours and some place that is related to each one.")
sys_msg = llmp.Message(content="Provide an answer in json", role="system")
resp = model.query([user_msg,sys_msg], json_response=True, temperature=0)

resp.show_header()
print(resp.message.content)
```

<details>
  <summary>Output</summary>

```
chat completion models:
  groq/llama3-70b-8192
  groq/llama3-8b-8192
  groq/mixtral-8x7b-32768
  groq/gemma-7b-it
  openai/gpt-4o
  openai/gpt-4-turbo
  openai/gpt-4-turbo-preview
  openai/gpt-3.5-turbo
  anthropic/claude-3-opus-20240229
  anthropic/claude-3-sonnet-20240229
  anthropic/claude-3-haiku-20240307
  ollama/phi3:latest
  ollama/phi3

Response successful; tokens: i:40 o:189
{
  "Colours": [
    {"Red": "The Eiffel Tower, Paris"},
    {"Blue": "Pacific Ocean near Hawaii"},
    {"Green": "Yellowstone National Park, USA"},
    {"Orange": "Sunset at the Grand Canyon, Arizona"},
    {"White": "Mt. Everest Base Camp, Nepal"},
    {"Black": "The Great Barrier Reef, Australia (night diving)"},
    {"Purple": "Royal Palace of Caserta, Italy"},
    {"Gray": "Snowy landscapes in the Swiss Alps"},
    {"Brown": "Amazon Rainforest, Brazil"},
    {"Yellow": "Kilimanjaro's snow-capped peak, Tanzania"}
  ]
}
```
</details>

### Templates example

Write a `templates.txt` file:
```
# template: question_yesno_json_sys
# role: system

Answer the question with any of these responses: yes, no, unknown, ambiguous.
Respond in json using this schema:
{ "answer": "..." }

# template: question_yesno_user
# role: user

{{question}}

# template: extract_keywords_json_sys
# role: system

Extract keywords from the provided text.
Respond in json using this schema:
{ "keywords": ["keyword1", "keyword2", ...] }
```

Use it in code:

```python
import llmprototyping as llmp

template_repo = llmp.TemplateFileRepository("templates.txt")
msg_sys = template_repo.render_message('question_yesno_json_sys', {})
msg_user = template_repo.render_message('question_yesno_user', {"question": "Is 1+1 = 2?"})

# model is an LLMChatCompletion object, see examples above
resp = model.query([msg_sys,msg_user], json_response=True)
```
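
The third template, `extract_keywords_json_sys`, has no matching user template, so the text to analyse can be passed as a plain `llmp.Message`. A sketch under the same assumption that `model` was built as in the earlier examples:

```python
msg_sys = template_repo.render_message('extract_keywords_json_sys', {})
msg_user = llmp.Message(content="Roman culture greatly influenced law, politics, language, and architecture in the Western world.")

resp = model.query([msg_sys,msg_user], json_response=True)
```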


            
