modelsmith

Name: modelsmith
Version: 0.6.1
Home page: https://github.com/christo-olivier/modelsmith
Summary: Get Pydantic models and Python types as LLM responses from Google Vertex AI and OpenAI models.
Upload time: 2024-07-09 12:24:22
Author: Christo Olivier
Maintainer: Christo Olivier
Requires Python: <4.0,>=3.10
License: MIT
Keywords: vertexai, pydantic, models, types, llm
            <p align="center">
  <img src="modelsmith.png" style="width: auto; height: auto;"/>
</p>

# Modelsmith
### Modelsmith is a Python library that allows you to get structured responses in the form of Pydantic models and Python types from Google Vertex AI, OpenAI, and Anthropic models.

Currently it allows you to use the following classes of model:
- __AnthropicModel__ (used with Anthropic's set of models such as `claude-3-haiku`, `claude-3-sonnet`, `claude-3-opus` and `claude-3-5-sonnet`)
- __OpenAIModel__ (used with OpenAI's set of models such as `gpt-3.5-turbo`, `gpt-4` and `gpt-4o`)
- __VertexAIChatModel__ (used with Google Vertex AI's chat models such as `chat-bison`)
- __VertexAITextGenerationModel__ (used with Google Vertex AI's text generation models such as `text-bison`)
- __VertexAIGenerativeModel__ (used with Google Vertex AI's generative models such as `gemini-pro`)

Modelsmith provides a unified interface over all of these. It has been designed to be extensible and can adapt to other models in the future.

# Notable Features

- __Structured Responses__: Specify both Pydantic models and Python types as the outputs of your LLM responses.
- __Templating__: Use Jinja2 templating in your prompts to allow complex prompt logic.
- __Default and Custom Prompts__: A default prompt template is provided but you can also specify your own.
- __Retry Logic__: Number of retries is user configurable.
- __Validation__: Outputs from the LLM are validated against your requested response model. Errors are fed back to the LLM to try to correct any validation failures.

# Installation

Install Modelsmith using pip or your favourite Python package manager.

`pip` example:
```bash
pip install modelsmith
```

## Anthropic Authentication

Authentication to Anthropic is done via the Anthropic flow. See the [Anthropic documentation](https://docs.anthropic.com/en/docs/quickstart#set-your-api-key) for more details. 

The `AnthropicModel` class takes an optional `api_key` parameter. If not provided, the `ANTHROPIC_API_KEY` environment variable will be used.
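
For example, a minimal sketch passing the key explicitly (reading it from a hypothetical `MY_ANTHROPIC_KEY` environment variable):

```python
import os

from modelsmith import AnthropicModel

# Pass the key explicitly; omit api_key to fall back to ANTHROPIC_API_KEY.
model = AnthropicModel(
    "claude-3-haiku-20240307",
    api_key=os.environ["MY_ANTHROPIC_KEY"],
)
```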

## Google Cloud Authentication

Authentication to Google Cloud is done via the Application Default Credentials flow. So make sure you have ADC configured. See [Google's documentation](https://cloud.google.com/docs/authentication/provide-credentials-adc) for more details.
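
For local development, one common way to configure ADC is via the `gcloud` CLI:

```bash
gcloud auth application-default login
```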

## OpenAI Authentication

Authentication to OpenAI is done via the OpenAI flow. See the [OpenAI documentation](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key) for more details.

The `OpenAIModel` class allows you to pass the `api_key`, `organization` and `project` when you initialize the class instance. If you do not pass these in, they will be inferred from the environment variables `OPENAI_API_KEY`, `OPENAI_ORG_ID` and `OPENAI_PROJECT_ID`, as per the OpenAI documentation.
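
For example, a minimal sketch passing these explicitly (the `organization` and `project` values shown are illustrative placeholders):

```python
import os

from modelsmith import OpenAIModel

# Explicit credentials; omit any of these to fall back to the
# OPENAI_API_KEY, OPENAI_ORG_ID and OPENAI_PROJECT_ID environment variables.
model = OpenAIModel(
    "gpt-4o",
    api_key=os.environ["OPENAI_API_KEY"],
    organization="org-example",
    project="proj-example",
)
```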

# Getting started

## NB! API changes in new release

The API changed in release 0.5.0. From this release onwards you no longer pass Vertex AI models directly from the `vertexai` Python package; instead you use the wrapper classes defined in the `modelsmith.language_models` module.

For convenience the new model wrapper classes can be imported directly from the `modelsmith` package without needing to reference the `language_models` module.

The old-style API is still supported in release 0.5.0 but will be deprecated in subsequent releases.
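
As a quick sketch, these two imports refer to the same wrapper class:

```python
# Full path via the language_models module
from modelsmith.language_models import VertexAIGenerativeModel

# Convenience import directly from the modelsmith package
from modelsmith import VertexAIGenerativeModel
```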

## Extracting a Pydantic model

Let's look at an example of extracting a Pydantic model from some text.

```python
from modelsmith import Forge, OpenAIModel
from pydantic import BaseModel, Field


# Define the pydantic model you want to receive as the response
class User(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")
    city: str = Field(description="The city where the person lives")
    country: str = Field(description="The country where the person lives")


# Create your forge instance
forge = Forge(model=OpenAIModel("gpt-3.5-turbo"), response_model=User)

# Generate a User instance from the prompt
user = forge.generate("Terry Tate 60. Lives in Irvine, United States.")

print(user)  # name='Terry Tate' age=60 city='Irvine' country='United States'
```

## Extracting a combined Pydantic and Python type

Modelsmith does not restrict you to either Pydantic models or Python types. You can combine them in the same response. Below we extract a list of Pydantic model instances.

```python
from modelsmith import Forge, VertexAIGenerativeModel
from pydantic import BaseModel, Field


class City(BaseModel):
    city: str = Field(description="The name of the city")
    state: str = Field(description="2-letter abbreviation of the state")


# Pass a list of Pydantic models to the response_model argument.
forge = Forge(
    model=VertexAIGenerativeModel("gemini-1.5-pro"),
    response_model=list[City],
)

response = forge.generate("I have lived in Irvine, CA and Dallas TX")

print(response)  # [City(city='Irvine', state='CA'), City(city='Dallas', state='TX')]
```

## Using different model types

Using a different model is as simple as passing the desired model class to the `Forge`. Taking the example above, let's use `text-bison` instead of `gemini-1.5-pro`.

```python
from modelsmith import Forge, VertexAITextGenerationModel  # import the correct class
from pydantic import BaseModel, Field


class City(BaseModel):
    city: str = Field(description="The name of the city")
    state: str = Field(description="2-letter abbreviation of the state")


# text-bison instead of gemini-1.5-pro
forge = Forge(
    model=VertexAITextGenerationModel("text-bison"),
    response_model=list[City],
)

response = forge.generate("I have lived in Irvine, CA and Dallas TX")

print(response)  # [City(city='Irvine', state='CA'), City(city='Dallas', state='TX')]
```

If we want to use an Anthropic model the same applies. Simply select the appropriate model class, specify which Anthropic model to use (in this case `claude-3-haiku-20240307`), and pass it to the `Forge` instance.

```python
from modelsmith import Forge, AnthropicModel  # import the correct class
from pydantic import BaseModel, Field


class City(BaseModel):
    city: str = Field(description="The name of the city")
    state: str = Field(description="2-letter abbreviation of the state")


# Anthropic's claude-3-haiku-20240307 instead of gemini-1.5-pro
forge = Forge(
    model=AnthropicModel("claude-3-haiku-20240307"),
    response_model=list[City],
)

response = forge.generate("I have lived in Irvine, CA and Dallas TX")

print(response)  # [City(city='Irvine', state='CA'), City(city='Dallas', state='TX')]
```

## Using the default prompt template

The previous examples use the built-in prompt template in zero-shot mode. The default template also works in few-shot mode, allowing you to pass in examples via the `prompt_values` parameter of the `generate` method. The default prompt template has a template variable called `examples` that we pass our example text to. The following example shows how this can be used.

```python
import inspect

from modelsmith import Forge, VertexAIGenerativeModel

# Create your forge instance
forge = Forge(
    model=VertexAIGenerativeModel("gemini-1.5-flash"), response_model=list[str]
)

# Define examples, using inspect.cleandoc to remove indentation
examples = inspect.cleandoc("""
    input: John Doe is forty years old. Lives in Alton, England
    output: ["John Doe", "40", "Alton", "England"]

    input: Sarah Green lives in London, UK. She is 32 years old.
    output: ["Sarah Green", "32", "London", "UK"]
""")

# Generate a Python list of string values from the input text
response = forge.generate(
    "Sophia Schmidt twenty three. Resident in Berlin Germany.",
    prompt_values={"examples": examples},
)

print(response)  # ['Sophia Schmidt', '23', 'Berlin', 'Germany']
```

## Using your own prompt template

If you want to use your own prompt you can simply pass it to the `prompt` parameter of the `Forge` class. Any Jinja2 template variables will be replaced with the values provided in the `prompt_values` parameter of the `generate` method.

⚠️ If using your own prompt, include a Jinja2 template variable called `response_model_json` to place your response model's JSON schema in your preferred location. If `response_model_json` is not included, the default response model template text will be appended to the end of your prompt.

Here is an example of using a custom prompt that includes the `response_model_json` template variable.

```python
import inspect

from modelsmith import Forge, OpenAIModel

# Create your custom prompt
my_prompt = inspect.cleandoc("""
    You are extracting city names from user provided text. You are only to extract
    city names and you should ignore country names or any other entities that are not
    cities.

    You MUST take the types of the OUTPUT SCHEMA into account and adjust your
    provided text to fit the required types.

    Here is the OUTPUT SCHEMA:
    {{ response_model_json }}
""")

# Create your forge instance, passing your prompt
forge = Forge(
    model=OpenAIModel("gpt-4o"),
    response_model=list,
    prompt=my_prompt,
)

# Generate your response
response = forge.generate(
    "Berlin is the capital of Germany. London is the capital of England."
)

print(response)  # ['Berlin', 'London']
```

The example above would also work with `response_model_json` left out of the prompt, since the schema text is appended automatically when the variable is missing.

```python
import inspect

from modelsmith import Forge, VertexAITextGenerationModel

# Create your custom prompt
my_prompt = inspect.cleandoc("""
    You are extracting city names from user provided text. You are only to extract
    city names and you should ignore country names or any other entities that are not
    cities.
""")

# Create your forge instance, passing your prompt
forge = Forge(
    model=VertexAITextGenerationModel("text-bison"),
    response_model=list,
    prompt=my_prompt,
)

# Generate your response
response = forge.generate(
    "Berlin is the capital of Germany. London is the capital of England."
)

print(response)  # ['Berlin', 'London']
```

## Placing user_input inside your prompt

By default user input is appended to the end of both custom and default prompts. Modelsmith allows you to place user input anywhere inside your custom prompt by adding the template variable `{{ user_input }}` where you want the user input to go.

```python
# Create your custom prompt with user input placed at the beginning
my_prompt = inspect.cleandoc("""
    Consider the following user input: {{ user_input }}

    You are extracting numbers from user input and combining them into one number.
    Take into account numbers written as text as well as in numerical format.
""")
```

## Setting the number of retries

By default, Modelsmith will try to get the desired response model from the LLM three times before raising an exception. On each retry the validation error is fed back to the LLM with a request to correct it.

You can change this by passing the `max_retries` parameter to the `Forge` class.

```python
# Create your forge instance, setting the number of retries
forge = Forge(
    model=VertexAIGenerativeModel("gemini-1.0-pro"), response_model=int, max_retries=2
)
```

## Matching patterns

Modelsmith looks for JSON output in the LLM response, using a regular expression to identify it. If for any reason you want to use a different pattern, you can pass it to the `match_pattern` parameter of the `Forge` class.
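
As a sketch, assuming `match_pattern` accepts a regular expression string, you could restrict matching to fenced JSON blocks (the pattern shown is illustrative):

```python
from modelsmith import Forge, OpenAIModel

# Hypothetical pattern: only match content inside a ```json fenced block.
# The inline (?s) flag lets "." span newlines.
json_block_pattern = r"(?s)```json\s*(.+?)\s*```"

forge = Forge(
    model=OpenAIModel("gpt-4o"),
    response_model=list,
    match_pattern=json_block_pattern,
)
```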

## Failing silently

Modelsmith raises a `ModelNotDerivedError` exception if no valid response is obtained. You can change this by passing `False` to the `raise_on_failure` parameter of the `Forge` class.

This will suppress the exception and return `None` instead.
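
A minimal sketch of the silent-failure mode:

```python
from modelsmith import Forge, OpenAIModel

forge = Forge(
    model=OpenAIModel("gpt-4o"),
    response_model=int,
    raise_on_failure=False,  # return None instead of raising ModelNotDerivedError
)

response = forge.generate("There is no number in this text.")

if response is None:
    print("No valid response could be derived.")
```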

## Passing prompt template variables and model settings

You can pass prompt template variables and model settings to the `prompt_values` and `model_settings` parameters of the `generate` method.


```python
import inspect

from modelsmith import Forge, OpenAIModel

# Create your custom prompt
my_prompt = inspect.cleandoc("""
    You are extracting city names from user provided text. You are only to extract
    city names and you should ignore country names or any other entities that are not
    cities.

    {{ user_input_prefix }}
    {{ user_input }}
""")

# Create your forge instance, passing your prompt
forge = Forge(
    model=OpenAIModel("gpt-4o"),
    response_model=list,
    prompt=my_prompt,
    max_retries=2,
)

# Custom LLM settings
model_settings = {
    "temperature": 0.8,
    "top_p": 1.0,
}

# Prompt template variable values to pass
prompt_values = {
    "user_input_prefix": "I have a the following text to analyze: ",
}

# Generate your response
response = forge.generate(
    "Berlin is the capital of Germany. London is the capital of England.",
    prompt_values=prompt_values,
    model_settings=model_settings,
)

print(response)  # ['Berlin', 'London']
```

## Learn more

Have a look at the tests included in this repository for more examples.

# Get in touch

If you have any questions or suggestions, feel free to open an issue or start a discussion.

# License

This project is licensed under the terms of the MIT License.
            
