# LangSmith Client SDK
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langsmith-sdk?logo=python)](https://github.com/langchain-ai/langsmith-sdk/releases)
[![Python Downloads](https://static.pepy.tech/badge/langsmith/month)](https://pepy.tech/project/langsmith)
This package contains the Python client for interacting with the [LangSmith platform](https://smith.langchain.com/).
To install:
```bash
pip install -U langsmith
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=ls_...
```
Then trace:
```python
import openai
from langsmith.wrappers import wrap_openai
from langsmith import traceable
# Auto-trace LLM calls in-context
client = wrap_openai(openai.Client())
@traceable # Auto-trace this function
def pipeline(user_input: str):
    result = client.chat.completions.create(
        messages=[{"role": "user", "content": user_input}],
        model="gpt-3.5-turbo"
    )
    return result.choices[0].message.content

pipeline("Hello, world!")
```
See the resulting nested trace [🌐 here](https://smith.langchain.com/public/b37ca9b1-60cd-4a2a-817e-3c4e4443fdc0/r).
LangSmith helps you and your team develop and evaluate language models and intelligent agents. It is compatible with any LLM application.
> **Cookbook:** For tutorials on how to get more value out of LangSmith, check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook/tree/main) repo.
A typical workflow looks like:
1. Set up an account with LangSmith.
2. Log traces while debugging and prototyping.
3. Run benchmark evaluations and continuously improve with the collected data.
We'll walk through these steps in more detail below.
## 1. Connect to LangSmith
Sign up for [LangSmith](https://smith.langchain.com/) using your GitHub or Discord account, or an email address and password. If you sign up with an email, make sure to verify your email address before logging in.
Then, create a unique API key on the [Settings Page](https://smith.langchain.com/settings), which is found in the menu at the top right corner of the page.
Note: Save the API Key in a secure location. It will not be shown again.
## 2. Log Traces
You can log traces natively using the LangSmith SDK or within your LangChain application.
### Logging Traces with LangChain
LangSmith seamlessly integrates with the Python LangChain library to record traces from your LLM applications.
1. **Copy the environment variables from the Settings Page and add them to your application.**
Tracing can be activated by setting the following environment variables or by manually passing a `LangChainTracer`; a sketch of the manual approach follows the chain example below.
```python
import os
os.environ["LANGSMITH_TRACING_V2"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
# os.environ["LANGSMITH_ENDPOINT"] = "https://eu.api.smith.langchain.com" # If signed up in the EU region
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
```
> **Tip:** Projects are groups of traces. All runs are logged to a project. If not specified, the project is set to `default`.
2. **Run an Agent, Chain, or Language Model in LangChain**
If the environment variables are correctly set, your application will automatically connect to the LangSmith platform.
```python
from langchain_core.runnables import chain
@chain
def add_val(x: dict) -> dict:
    return {"val": x["val"] + 1}

add_val.invoke({"val": 1})
```
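If you prefer not to rely on environment variables, you can attach the tracer per-invocation instead. A minimal sketch (the project name is a placeholder):

```python
from langchain_core.tracers import LangChainTracer

# Pass the tracer explicitly through the runnable's callback config.
tracer = LangChainTracer(project_name="My Project Name")  # hypothetical project
add_val.invoke({"val": 1}, config={"callbacks": [tracer]})
```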
### Logging Traces Outside LangChain
You can still use the LangSmith development platform without depending on any
LangChain code.
1. **Copy the environment variables from the Settings Page and add them to your application.**
```python
import os
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGCHAIN_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
```
2. **Log traces**
The easiest way to log traces using the SDK is via the `@traceable` decorator. Below is an example.
```python
from datetime import datetime

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable
def argument_generator(query: str, additional_description: str = "") -> str:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a debater making an argument on a topic."
             f"{additional_description}"
             f" The current time is {datetime.now()}"},
            {"role": "user", "content": f"The discussion topic is {query}"}
        ],
    ).choices[0].message.content

@traceable
def argument_chain(query: str, additional_description: str = "") -> str:
    argument = argument_generator(query, additional_description)
    # ... Do other processing or call other functions...
    return argument

argument_chain("Why is blue better than orange?")
```
Alternatively, you can manually log events using the `Client` directly or using a `RunTree`, which is what the `@traceable` decorator manages for you. (A sketch of the `Client`-only approach follows the `RunTree` example below.)
A `RunTree` represents one unit of work in your application; child runs nest beneath it to form a trace. Each `RunTree` object is required to have a `name` and `run_type`. These and other important attributes are as follows:
- `name`: `str` - used to identify the component's purpose
- `run_type`: `str` - Currently one of "llm", "chain" or "tool"; more options will be added in the future
- `inputs`: `dict` - the inputs to the component
- `outputs`: `Optional[dict]` - the (optional) returned values from the component
- `error`: `Optional[str]` - Any error messages that may have arisen during the call
```python
from langsmith.run_trees import RunTree

parent_run = RunTree(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    # project_name="Defaults to the LANGSMITH_PROJECT env var"
)
parent_run.post()
# .. My Chat Bot calls an LLM
child_llm_run = parent_run.create_child(
    name="My Proprietary LLM",
    run_type="llm",
    inputs={
        "prompts": [
            "You are an AI Assistant. The time is XYZ."
            " Summarize this morning's meetings."
        ]
    },
)
child_llm_run.post()
child_llm_run.end(
    outputs={
        "generations": [
            "I should use the transcript_loader tool"
            " to fetch meeting_transcripts from XYZ"
        ]
    }
)
child_llm_run.patch()
# .. My Chat Bot takes the LLM output and calls
# a tool / function for fetching transcripts ..
child_tool_run = parent_run.create_child(
    name="transcript_loader",
    run_type="tool",
    inputs={"date": "XYZ", "content_type": "meeting_transcripts"},
)
child_tool_run.post()
# The tool returns meeting notes to the chat bot
child_tool_run.end(outputs={"meetings": ["Meeting1 notes.."]})
child_tool_run.patch()

child_chain_run = parent_run.create_child(
    name="Unreliable Component",
    run_type="tool",
    inputs={"input": "Summarize these notes..."},
)
child_chain_run.post()

try:
    # .... the component does work
    raise ValueError("Something went wrong")
    child_chain_run.end(outputs={"output": "foo"})
    child_chain_run.patch()
except Exception as e:
    child_chain_run.end(error=f"I errored again {e}")
    child_chain_run.patch()
# .. The chat agent recovers

parent_run.end(outputs={"output": ["The meeting notes are as follows:..."]})
res = parent_run.patch()
res.result()
```
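The `RunTree` example above can also be written against the `Client` directly. A minimal sketch, assuming the client's `create_run`/`update_run` methods with a caller-supplied run id:

```python
import uuid

from langsmith import Client

client = Client()
run_id = uuid.uuid4()
# Post the run first, then patch it with outputs when the work finishes.
client.create_run(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    id=run_id,
)
# ... do the actual work ...
client.update_run(run_id, outputs={"output": "The meeting notes are as follows:..."})
```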
## Create a Dataset from Existing Runs
Once your runs are stored in LangSmith, you can convert them into a dataset.
For this example, we will do so using the Client, but you can also do this using
the web interface, as explained in the [LangSmith docs](https://docs.smith.langchain.com/docs/).
```python
from langsmith import Client
client = Client()
dataset_name = "Example Dataset"
# We will only use examples from the top level AgentExecutor run here,
# and exclude runs that errored.
runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)

dataset = client.create_dataset(dataset_name, description="An example dataset")
for run in runs:
    client.create_example(
        inputs=run.inputs,
        outputs=run.outputs,
        dataset_id=dataset.id,
    )
```
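If you already have input/output pairs in memory, the client also exposes a batch `create_examples` method; a minimal sketch with placeholder data:

```python
# Batch-create examples in a single request instead of one call per run.
client.create_examples(
    inputs=[{"question": "What is LangSmith?"}],
    outputs=[{"answer": "A platform for tracing and evaluating LLM applications."}],
    dataset_id=dataset.id,
)
```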
## Evaluating Runs
Check out the [LangSmith Testing & Evaluation docs](https://docs.smith.langchain.com/docs/evaluation/) for up-to-date workflows.
For generating automated feedback on individual runs, you can run evaluations directly using the LangSmith client.
```python
from typing import Optional

from langsmith import Client
from langsmith.evaluation import StringEvaluator

client = Client()


def jaccard_chars(output: str, answer: str) -> float:
    """Naive Jaccard similarity between two strings."""
    prediction_chars = set(output.strip().lower())
    answer_chars = set(answer.strip().lower())
    intersection = prediction_chars.intersection(answer_chars)
    union = prediction_chars.union(answer_chars)
    return len(intersection) / len(union)


def grader(run_input: str, run_output: str, answer: Optional[str]) -> dict:
    """Compute the score and/or label for this run."""
    if answer is None:
        value = "AMBIGUOUS"
        score = 0.5
    else:
        score = jaccard_chars(run_output, answer)
        value = "CORRECT" if score > 0.9 else "INCORRECT"
    return dict(score=score, value=value)


evaluator = StringEvaluator(evaluation_name="Jaccard", grading_function=grader)

runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)
for run in runs:
    client.evaluate_run(run, evaluator)
```
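To score a target function over a whole dataset rather than run-by-run, recent SDK versions also ship an `evaluate` helper. A sketch under that assumption, reusing the "Example Dataset" created above with a placeholder target and a toy evaluator:

```python
from langsmith.evaluation import evaluate

def exact_match(run, example) -> dict:
    # Compare the traced output against the example's reference output.
    return {"key": "exact_match", "score": run.outputs == example.outputs}

evaluate(
    lambda inputs: {"output": inputs},  # placeholder target; swap in your app
    data="Example Dataset",
    evaluators=[exact_match],
)
```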
## Integrations
LangSmith easily integrates with your favorite LLM framework.
### OpenAI SDK
<!-- markdown-link-check-disable -->
We provide a convenient wrapper for the [OpenAI SDK](https://platform.openai.com/docs/api-reference).
To use it, first set your LangSmith API key.
```shell
export LANGSMITH_API_KEY=<your-api-key>
```
Next, you will need to install the LangSmith SDK:
```shell
pip install -U langsmith
```
After that, you can wrap the OpenAI client:
```python
from openai import OpenAI
from langsmith import wrappers
client = wrappers.wrap_openai(OpenAI())
```
Now you can use the OpenAI client as you normally would; everything is logged to LangSmith!
```python
client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
```
Oftentimes, you use the OpenAI client inside other functions.
You can get nested traces by using this wrapped client and decorating those functions with `@traceable`.
See [this documentation](https://docs.smith.langchain.com/tracing/faq/logging_and_viewing) for more information on how to use this decorator.
```python
from langsmith import traceable
@traceable(name="Call OpenAI")
def my_function(text: str):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Say {text}"}],
    )
my_function("hello world")
```
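The wrapper also supports the async client; a minimal sketch under the same assumptions:

```python
import asyncio

import openai
from langsmith import wrappers

async_client = wrappers.wrap_openai(openai.AsyncClient())

async def main() -> None:
    # Traced exactly like the sync client.
    await async_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
    )

asyncio.run(main())
```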
### Instructor

We provide a convenient integration with [Instructor](https://jxnl.github.io/instructor/), which works out of the box because Instructor is built on top of the OpenAI SDK.

To use it, first set your LangSmith API key.
```shell
export LANGSMITH_API_KEY=<your-api-key>
```
Next, you will need to install the LangSmith SDK:
```shell
pip install -U langsmith
```
After that, you can wrap the OpenAI client:
```python
from openai import OpenAI
from langsmith import wrappers
client = wrappers.wrap_openai(OpenAI())
```
After this, you can patch the wrapped client using `instructor`:

```python
import instructor

# Patch the already-wrapped client so calls remain traced in LangSmith.
client = instructor.patch(client)
```
Now you can use `instructor` as you normally would; everything is logged to LangSmith!
```python
from pydantic import BaseModel
class UserDetail(BaseModel):
    name: str
    age: int


user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ],
)
```
Oftentimes, you use `instructor` inside other functions.
You can get nested traces by using this wrapped client and decorating those functions with `@traceable`.
See [this documentation](https://docs.smith.langchain.com/tracing/faq/logging_and_viewing) for more information on how to use this decorator.
```python
from langsmith import traceable


@traceable()
def my_function(text: str) -> UserDetail:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserDetail,
        messages=[
            {"role": "user", "content": f"Extract {text}"},
        ],
    )


my_function("Jason is 25 years old")
```
## Additional Documentation
To learn more about the LangSmith platform, check out the [docs](https://docs.smith.langchain.com/docs/).