llmprototyping 0.1.0.dev4

- Summary: A lightweight set of tools to use several LLM and embedding APIs
- Home page: https://github.com/alejandrolc/llmprototyping
- Author: Alejandro López Correa
- Requires Python: >=3.9, <4
- Keywords: llm, rag, openai, groq, ollama
- Upload time: 2024-05-09 11:09:31
# llmprototyping

`llmprototyping` is a Python package designed to provide easy and uniform access to various large language model (LLM) and embedding APIs, along with basic functionality for building small-scale artificial intelligence applications.

## Features

- **Uniform API Access**: Simplify your interactions with different LLM and embedding APIs using a single interface.
- **Basic AI Application Tools**: Get started quickly with tools designed to support the development of AI applications.
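
For example, switching providers is only a change of model name. A minimal sketch based on the factory calls shown in the Usage section below; it assumes the relevant API key is present in the environment:

```python
import os
import llmprototyping as llmp

# The backend is selected by the model name prefix; the calling
# code is identical for groq/, openai/ and ollama/ models.
factory = llmp.LLMChatCompletionFactory
model = factory.build('groq/llama3-8b-8192', {'api_key': os.environ['GROQ_API_KEY']})
# model = factory.build('openai/gpt-3.5-turbo', {'api_key': os.environ['OPENAI_API_KEY']})
```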

## License

Apache License Version 2.0

## Compatibility

Python 3.9+

## Installation

```bash
pip install llmprototyping
```
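
Note that the current release on PyPI is a development version (0.1.0.dev4), and pip skips pre-releases by default, so you may need one of:

```bash
pip install --pre llmprototyping
# or pin the exact development release
pip install llmprototyping==0.1.0.dev4
```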

## Available models

### chat completion models

- groq/llama3-70b-8192
- groq/llama3-8b-8192
- groq/mixtral-8x7b-32768
- groq/gemma-7b-it
- openai/gpt-4-turbo
- openai/gpt-4-turbo-preview
- openai/gpt-3.5-turbo
- ollama/*

### embedding models

- openai/text-embedding-3-small
- openai/text-embedding-3-large
- openai/text-embedding-ada-002

## Usage

Note: The examples use python-dotenv. It is not required by llmprototyping, so it must be installed separately.
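
python-dotenv can be installed with `pip install python-dotenv`. It loads variables from a `.env` file in the working directory; for the examples below such a file would look something like this (the values are placeholders):

```
GROQ_API_KEY=<your Groq API key>
OPENAI_API_KEY=<your OpenAI API key>
OLLAMA_HOST=http://192.168.1.2:11434
```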

### Simple chat completion call

```python
import os
from dotenv import load_dotenv
load_dotenv()

groq_api_key = os.getenv('GROQ_API_KEY')

import llmprototyping as llmp
factory = llmp.LLMChatCompletionFactory
model = factory.build('groq/llama3-70b-8192', {'api_key': groq_api_key})
user_msg = llmp.Message(content="Please give me a list of ten colours and some place that is related to each one.")
sys_msg = llmp.Message(content="Provide an answer in json", role="system")
resp = model.query([user_msg,sys_msg], json_response=True, temperature=0)
resp.show()
```

<details>
  <summary>Output</summary>

```
Response successful; tokens: i:43 o:145 message:
Message role:assistant content:
{
"colours": [
{"colour": "Red", "place": "Rome"},
{"colour": "Orange", "place": "Netherlands"},
{"colour": "Yellow", "place": "Sunshine Coast"},
{"colour": "Green", "place": "Emerald Isle"},
{"colour": "Blue", "place": "Blue Mountains"},
{"colour": "Indigo", "place": "Indigo Bay"},
{"colour": "Violet", "place": "Violet Hill"},
{"colour": "Pink", "place": "Pink Sands Beach"},
{"colour": "Brown", "place": "Brown County"},
{"colour": "Grey", "place": "Greytown"}
]
}
```
</details>
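
Since `json_response=True` asks the model for a JSON answer, the returned text can be parsed with the standard `json` module. A minimal sketch continuing the example above (`resp.message.content` is the attribute used in the Ollama example below; the exact keys depend on what the model returns):

```python
import json

# resp.message.content holds the raw text of the assistant message
data = json.loads(resp.message.content)
for item in data['colours']:
    print(f"{item['colour']}: {item['place']}")
```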

### List available models

```python
import llmprototyping as llmp

print('chat completion models:')
for model_name in llmp.LLMChatCompletionFactory.available_models:
    print(f"  {model_name}")

print('embedding models:')
for model_name in llmp.EmbeddingComputerFactory.available_models:
    print(f"  {model_name}")
```

### Embeddings example: search

```python
knowledge_list = [
    "Rome was founded in 753 BCE according to tradition, by Romulus and Remus.",
    "The Roman Republic was established in 509 BCE after overthrowing the last Etruscan kings.",
    "Julius Caesar became the perpetual dictator in 44 BCE, shortly before his assassination.",
    "The Roman Empire officially began when Octavian received the title of Augustus in 27 BCE.",
    "At its peak, the Roman Empire extended from Hispania to Mesopotamia.",
    "The capital of the Empire was moved to Constantinople by Constantine I in 330.",
    "The fall of Rome occurred in 476 CE when the last Western Roman emperor, Romulus Augustulus, was deposed.",
    "Roman culture greatly influenced law, politics, language, and architecture in the Western world.",
    "The expansion of Christianity as the official religion was promoted by Constantine after the Battle of the Milvian Bridge in 312.",
    "Roman society was heavily stratified between patricians, plebeians, and slaves."
]

question = "What is the name of the last emperor?"

import os
from dotenv import load_dotenv
load_dotenv()

openai_api_key = os.getenv('OPENAI_API_KEY')

import llmprototyping as llmp

import shelve
db = shelve.open('test_embeddings.db')

def get_embedding(text, computer):
    if text in db:
        json = db[text]
        return llmp.EmbeddingVector.from_json(json)

    print(f'computing embedding for "{text}"')
    em = computer.get_embedding(text)
    db[text] = em.to_json()

    return em

factory = llmp.EmbeddingComputerFactory
computer = factory.build('openai/text-embedding-3-small', {'api_key': openai_api_key})        

entry_table = dict()

for entry_id, entry_text in enumerate(knowledge_list):
    em = get_embedding(entry_text, computer)
    entry_table[entry_id] = em

em_question = get_embedding(question, computer)

vdb = llmp.FAISSDatabase(embedding_type=computer.model_name, embedding_size=computer.vector_size)
vdb.put_records(entry_table)

print(f"query: {question}")
results = vdb.search(em_question)
for distance, entry_id in results:
    print(f"{distance:.3f} {entry_id} {knowledge_list[entry_id]}")
```

<details>
  <summary>Output</summary>

```
computing embedding for "What is the name of the last emperor?"
query: What is the name of the last emperor?
1.105 6 The fall of Rome occurred in 476 CE when the last Western Roman emperor, Romulus Augustulus, was deposed.
1.337 2 Julius Caesar became the perpetual dictator in 44 BCE, shortly before his assassination.
1.457 3 The Roman Empire officially began when Octavian received the title of Augustus in 27 BCE.
1.522 5 The capital of the Empire was moved to Constantinople by Constantine I in 330.
1.559 1 The Roman Republic was established in 509 BCE after overthrowing the last Etruscan kings.
```

Distance values may vary, depending on the embeddings actually computed.
</details>
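
One caveat in the caching code above: the shelve database is never closed, so pending writes may not be flushed to disk. A minimal alternative using `shelve`'s standard context-manager support:

```python
import shelve

# shelve.Shelf is a context manager: the cache file is flushed
# and closed automatically, even if an embedding call raises.
with shelve.open('test_embeddings.db') as db:
    ...  # perform the get_embedding calls here
```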

### Ollama example: chat

`OLLAMA_HOST` is the URI of the Ollama server, e.g. `http://192.168.1.2:11434`.

Before the server's models can be used, they must be registered by calling `ollama_discover`. `ollama_pull_model` can be used to pull a model onto the server.

```python
import os
from dotenv import load_dotenv
load_dotenv()

ollama_host = os.getenv('OLLAMA_HOST')

import llmprototyping as llmp

llmp.ollama_discover(host=ollama_host)
llmp.ollama_pull_model(host=ollama_host, model_name='phi3')

print('chat completion models:')
for model_name in llmp.LLMChatCompletionFactory.available_models:
    print(f"  {model_name}")
print()

factory = llmp.LLMChatCompletionFactory
model = factory.build('ollama/phi3:latest')
user_msg = llmp.Message(content="Please give me a list of ten colours and some place that is related to each one.")
sys_msg = llmp.Message(content="Provide an answer in json", role="system")
resp = model.query([user_msg,sys_msg], json_response=True, temperature=0)

resp.show_header()
print(resp.message.content)
```

<details>
  <summary>Output</summary>

```
chat completion models:
  groq/llama3-70b-8192
  groq/llama3-8b-8192
  groq/mixtral-8x7b-32768
  groq/gemma-7b-it
  openai/gpt-4-turbo
  openai/gpt-4-turbo-preview
  openai/gpt-3.5-turbo
  ollama/phi3:latest

Response successful; tokens: i:40 o:189
{
  "Colours": [
    {"Red": "The Eiffel Tower, Paris"},
    {"Blue": "Pacific Ocean near Hawaii"},
    {"Green": "Yellowstone National Park, USA"},
    {"Orange": "Sunset at the Grand Canyon, Arizona"},
    {"White": "Mt. Everest Base Camp, Nepal"},
    {"Black": "The Great Barrier Reef, Australia (night diving)"},
    {"Purple": "Royal Palace of Caserta, Italy"},
    {"Gray": "Snowy landscapes in the Swiss Alps"},
    {"Brown": "Amazon Rainforest, Brazil"},
    {"Yellow": "Kilimanjaro's snow-capped peak, Tanzania"}
  ]
}
```
</details>

            
