think-llm

Name: think-llm
Version: 0.0.7
Summary: Create programs that think, using LLMs.
Author email: Senko Rasic <senko@senko.net>
Requires Python: >=3.10
License: MIT
Keywords: ai, llm
Upload time: 2024-12-02 08:25:39
# Think

Think is a Python package for creating thinking programs.

It provides simple but powerful primitives for composable and robust
integration of Large Language Models (LLMs) into your Python programs.

Think supports OpenAI, Anthropic, Google (Gemini), and Groq as LLM
providers, plus Ollama for local models and any OpenAI-compatible
LLM API provider.

## Examples

Ask a question:

```python
from asyncio import run

from think import LLM, ask

llm = LLM.from_url("anthropic:///claude-3-haiku-20240307")

async def haiku(topic):
    return await ask(llm, "Write a haiku about {{ topic }}", topic=topic)

print(run(haiku("computers")))
```

Get answers as structured data:

```python
from asyncio import run

from think import LLM, LLMQuery

llm = LLM.from_url("openai:///gpt-4o-mini")

class CityInfo(LLMQuery):
    """
    Give me basic information about {{ city }}.
    """
    name: str
    country: str
    population: int
    latitude: float
    longitude: float

async def city_info(city):
    return await CityInfo.run(llm, city=city)

info = run(city_info("Paris"))
print(f"{info.name} is a city in {info.country} with {info.population} inhabitants.")
```

Integrate AI with custom tools:

```python
from asyncio import run
from datetime import date

from think import LLM, Chat

llm = LLM.from_url("openai:///gpt-4o-mini")

def current_date() -> str:
    """
    Get the current date.

    :returns: current date in YYYY-MM-DD format
    """
    return date.today().isoformat()

async def days_to_xmas() -> str:
    chat = Chat("How many days are left until Christmas?")
    return await llm(chat, tools=[current_date])

print(run(days_to_xmas()))
```

Use vision (with models that support it):

```python
from asyncio import run

from think import LLM, Chat

llm = LLM.from_url("openai:///gpt-4o-mini")

async def describe_image(path):
    with open(path, "rb") as f:
        image_data = f.read()
    chat = Chat().user("Describe the image in detail", images=[image_data])
    return await llm(chat)

print(run(describe_image("path/to/image.jpg")))
```

Use Pydantic or custom parsers for structured data:

```python
from asyncio import run
from ast import parse

from think import LLM, Chat
from think.parser import CodeBlockParser
from think.prompt import JinjaStringTemplate

llm = LLM.from_url("openai:///gpt-4o-mini")

def parse_python(text):
    # extract code block from the text
    block_parser = CodeBlockParser()
    code = block_parser(text)
    # check if the code is valid Python syntax
    try:
        parse(code)
        return code
    except SyntaxError as err:
        raise ValueError(f"Invalid Python code: {err}") from err

async def generate_python_script(task):
    system = "You always output the requested code in a single Markdown code block"
    prompt = "Write a Python script for the following task: {{ task }}"
    tpl = JinjaStringTemplate()
    chat = Chat(system).user(tpl(prompt, task=task))
    return await llm(chat, parser=parse_python)

print(run(generate_python_script("sort a list of numbers")))
```
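
A parser can be any callable that takes the raw response text and returns the
parsed value (as `parse_python` does above), so validating into a Pydantic model
works the same way. A minimal sketch, assuming the model is instructed to reply
with bare JSON; `UserProfile`, `validate_user`, and `make_profile` are
illustrative names, not part of Think's API:

```python
from asyncio import run

from pydantic import BaseModel, ValidationError

from think import LLM, Chat

llm = LLM.from_url("openai:///gpt-4o-mini")

# Hypothetical target model for this sketch.
class UserProfile(BaseModel):
    name: str
    age: int

def validate_user(text: str) -> UserProfile:
    # Raise ValueError on bad output, as parse_python does above.
    try:
        return UserProfile.model_validate_json(text)
    except ValidationError as err:
        raise ValueError(f"Invalid profile JSON: {err}") from err

async def make_profile(description: str) -> UserProfile:
    system = "Reply with a single JSON object with keys 'name' (string) and 'age' (integer), and nothing else."
    chat = Chat(system).user(f"Create a user profile for: {description}")
    return await llm(chat, parser=validate_user)

print(run(make_profile("a 30-year-old engineer named Alice")))
```

For structured data, `LLMQuery` (shown earlier) handles the prompting and
parsing for you; a hand-rolled parser like this is mainly useful when you need
custom validation logic.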

For detailed documentation on usage and all available features, please refer to the
code docstrings and the integration tests.

## Quickstart

Install via `pip`:

```bash
pip install think-llm
```

Note that the package name is `think-llm`, *not* `think`.

You'll also need to install the providers you want to use:
`openai`, `anthropic`, `google-generativeai`, `groq`, or `ollama`. You
can install them together with Think via `pip` as well:

```bash
pip install think-llm[openai]
pip install think-llm[anthropic]
pip install think-llm[gemini]
pip install think-llm[groq]
pip install think-llm[ollama]
pip install think-llm[all]  # to install all of them
pip install think-llm[dev]  # if you want to run the tests or modify Think
```

You can set up your LLM credentials via environment variables, for example:

```bash
export OPENAI_API_KEY=<your-openai-key>
export ANTHROPIC_API_KEY=<your-anthropic-key>
...
```

Or pass them directly in the model URL:

```python
from think import LLM

llm = LLM.from_url(f"openai://{YOUR_OPENAI_KEY}@/gpt-4o-mini")
```

In practice, you might want to store the entire model URL in the environment
variable and just call `LLM.from_url(os.environ["LLM_URL"])`.
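
A minimal sketch of that pattern (the `LLM_URL` variable name is just a
convention, not something Think requires):

```python
import os

from think import LLM

# e.g. export LLM_URL="openai:///gpt-4o-mini"
llm = LLM.from_url(os.environ["LLM_URL"])
```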

## Model URL

Think uses a URL-like format to specify the model to use. The format is:

```
provider://[api_key@][host[:port]]/model[?query]
```

- `provider` is the model provider, e.g. `openai` or `anthropic`
- `api_key` is the API key for the model provider (optional if set via an
    environment variable)
- `host[:port]` is the server to use, useful for local LLMs; for OpenAI and
    Anthropic it should be left empty to use their default base URL
- `model` is the name of the model to use

Examples:

- `openai:///gpt-3.5-turbo` (API key in environment)
- `openai://sk-my-openai-key@/gpt-3.5-turbo` (explicit API key)
- `openai://localhost:1234/v1?model=llama-3.2-8b` (custom server over HTTP)
- `openai+https://openrouter.ai/api/v1?model=llama-3.2-8b` (custom server, HTTPS)

(Note that if the base URL is provided, the model must be passed as a query parameter.)

The URL format lets you switch between models and providers without changing
your code, or use multiple models in the same program without hardcoding
anything.
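
As a sketch, the same call can run against different providers just by changing
the URL (this assumes `ask` accepts a plain prompt with no template variables,
and that credentials for both providers are set in the environment):

```python
from asyncio import run

from think import LLM, ask

# The same code works for any configured provider; only the URL differs.
for url in [
    "openai:///gpt-4o-mini",
    "anthropic:///claude-3-haiku-20240307",
]:
    llm = LLM.from_url(url)
    print(run(ask(llm, "Say hello in one short sentence.")))
```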

## Roadmap

Features and capabilities that are planned for the near future:

- documentation
- support for local LLMs via HuggingFace
- more examples

If you want to help with any of these, please look at the open issues, join the
conversation and submit a PR. Please read the Contributing section below.

## Contributing

Contributions are welcome!

To ensure that your contribution is accepted, please follow these guidelines:

- open an issue to discuss your idea before you start working on it, or if there's
  already an issue for your idea, join the conversation there and explain how you
  plan to implement it
- make sure that your code is well documented (docstrings, type annotations, comments,
  etc.) and tested (test coverage should only go up)
- make sure that your code is formatted and type-checked with `ruff` (default settings)

## Copyright

Copyright (C) 2023-2024 Senko Rasic and Think contributors. You may use and/or
distribute this project under the terms of the MIT license. See the LICENSE file
for more details.

            
