| Name | galileo |
| --- | --- |
| Version | 1.8.1 |
| Summary | Client library for the Galileo platform. |
| Author | Galileo Technologies Inc. |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | <3.14,>=3.9 |
| License | None |
| Upload time | 2025-07-11 22:26:43 |
# Galileo Python SDK
<div align="center">
<strong>The Python client library for the Galileo AI platform.</strong>
[![PyPI][pypi-badge]][pypi-url]
[![Python Version][python-badge]][python-url]
![codecov.io][codecov-url]
</div>
[pypi-badge]: https://img.shields.io/pypi/v/galileo.svg
[pypi-url]: https://pypi.org/project/galileo/
[python-badge]: https://img.shields.io/pypi/pyversions/galileo.svg
[python-url]: https://www.python.org/downloads/
[codecov-url]: https://codecov.io/github/rungalileo/galileo-python/coverage.svg?branch=main
## Getting Started
### Installation
`pip install galileo`
### Setup
Set the following environment variables:
- `GALILEO_API_KEY`: Your Galileo API key
- `GALILEO_PROJECT`: (Optional) Project name
- `GALILEO_LOG_STREAM`: (Optional) Log stream name
- `GALILEO_LOGGING_DISABLED`: (Optional) Disable collecting and sending logs to Galileo
Note: to point at an environment other than `app.galileo.ai`, set the `GALILEO_CONSOLE_URL` environment variable.
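If you prefer to configure these in code (for example, in a notebook), you can set them with `os.environ` before importing `galileo`. A minimal sketch with placeholder values:

```python
import os

# Placeholder values -- substitute your own key, project, and log stream
os.environ["GALILEO_API_KEY"] = "your-api-key"
os.environ["GALILEO_PROJECT"] = "your-project-name"        # optional
os.environ["GALILEO_LOG_STREAM"] = "your-log-stream-name"  # optional
```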
### Usage
#### Logging traces
```python
import os
from galileo import galileo_context
from galileo.openai import openai
# If you've set your GALILEO_PROJECT and GALILEO_LOG_STREAM env vars, you can skip this step
galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")
# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
def call_openai():
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4o"
    )
    return chat_completion.choices[0].message.content
# This will create a single span trace with the OpenAI call
call_openai()
# This will upload the trace to Galileo
galileo_context.flush()
```
You can also use the `@log` decorator to log spans. Here's how to create a workflow span with two nested LLM spans:
```python
from galileo import galileo_context, log

@log
def make_nested_call():
    call_openai()
    call_openai()
# If you've set your GALILEO_PROJECT and GALILEO_LOG_STREAM env vars, you can skip this step
galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")
# This will create a trace with a workflow span and two nested LLM spans containing the OpenAI calls
make_nested_call()
```
Here's how to create a retriever span using the decorator:
```python
from galileo import log
@log(span_type="retriever")
def retrieve_documents(query: str):
    return ["doc1", "doc2"]
# This will create a trace with a retriever span containing the documents in the output
retrieve_documents(query="history")
```
Here's how to create a tool span using the decorator:
```python
from galileo import galileo_context, log

@log(span_type="tool")
def tool_call(input: str = "tool call input"):
    return "tool call output"
# This will create a trace with a tool span containing the tool call output
tool_call(input="question")
# This will upload the trace to Galileo
galileo_context.flush()
```
In some cases, you may want to wrap a block of code to start and flush a trace automatically. You can do this using the `galileo_context` context manager:
```python
from galileo import galileo_context
# This will log a block of code to the project and log stream specified in the context manager
with galileo_context():
    content = make_nested_call()
    print(content)
```
`galileo_context` also allows you to specify a separate project and log stream for the trace:
```python
from galileo import galileo_context
# This will log to the project and log stream specified in the context manager
with galileo_context(project="gen-ai-project", log_stream="test2"):
    content = make_nested_call()
    print(content)
```
You can also use the `GalileoLogger` for manual logging scenarios:
```python
from galileo.logger import GalileoLogger
# This will log to the project and log stream specified in the logger constructor
logger = GalileoLogger(project="gen-ai-project", log_stream="test3")
trace = logger.start_trace("Say this is a test")
logger.add_llm_span(
    input="Say this is a test",
    output="Hello, this is a test",
    model="gpt-4o",
    num_input_tokens=10,
    num_output_tokens=3,
    total_tokens=13,
    duration_ns=1000,
)
logger.conclude(output="Hello, this is a test", duration_ns=1000)
logger.flush() # This will upload the trace to Galileo
```
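The token counts and `duration_ns` above are hardcoded for illustration. In practice you can measure the duration yourself with the standard library; a sketch, where the token counts remain placeholders you would take from your model's usage data:

```python
import time

from galileo.logger import GalileoLogger

logger = GalileoLogger(project="gen-ai-project", log_stream="test3")
logger.start_trace("Say this is a test")

start_ns = time.perf_counter_ns()
output = "Hello, this is a test"  # stand-in for a real model call
elapsed_ns = time.perf_counter_ns() - start_ns

logger.add_llm_span(
    input="Say this is a test",
    output=output,
    model="gpt-4o",
    num_input_tokens=10,   # placeholder: read from your model's usage response
    num_output_tokens=3,   # placeholder
    total_tokens=13,       # placeholder
    duration_ns=elapsed_ns,
)

logger.conclude(output=output, duration_ns=elapsed_ns)
logger.flush()
```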
OpenAI streaming example:
```python
import os
from galileo.openai import openai
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4o", stream=True,
)

# This will create a single span trace with the OpenAI call
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
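If you run the streaming example as a standalone script, flush the context afterwards to upload the trace, as in the earlier examples (same `galileo_context` API shown above):

```python
from galileo import galileo_context

# Upload the streamed trace to Galileo
galileo_context.flush()
```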
In some cases (like long-running processes), it may be necessary to explicitly flush the trace to upload it to Galileo:
```python
import os
from galileo import galileo_context
from galileo.openai import openai
galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")
# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
def call_openai():
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4o"
    )
    return chat_completion.choices[0].message.content
# This will create a single span trace with the OpenAI call
call_openai()
# This will upload the trace to Galileo
galileo_context.flush()
```
Using the LangChain callback handler:
```python
from galileo.handlers.langchain import GalileoCallback
from langchain.schema import HumanMessage
from langchain_openai import ChatOpenAI
# You can optionally pass a GalileoLogger instance to the callback if you don't want to use the default context
callback = GalileoCallback()
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7, callbacks=[callback])
# Create a message with the user's query
messages = [HumanMessage(content="What is LangChain and how is it used with OpenAI?")]
# Make the API call
response = llm.invoke(messages)
print(response.content)
```
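As the comment above notes, the callback can take your own `GalileoLogger` instead of relying on the default context. A sketch; the `galileo_logger` keyword name is an assumption, so check the signature in your installed version:

```python
from galileo.handlers.langchain import GalileoCallback
from galileo.logger import GalileoLogger

# Route LangChain traces to an explicit project and log stream
logger = GalileoLogger(project="gen-ai-project", log_stream="langchain-stream")
callback = GalileoCallback(galileo_logger=logger)  # keyword name is an assumption
```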
#### Datasets
Create a dataset:
```python
from galileo.datasets import create_dataset
create_dataset(
    name="names",
    content=[
        {"name": "Lola"},
        {"name": "Jo"},
    ],
)
```
Get a dataset:
```python
from galileo.datasets import get_dataset
dataset = get_dataset(name="names")
```
List all datasets:
```python
from galileo.datasets import list_datasets
datasets = list_datasets()
```
#### Experiments
Run an experiment with a prompt template:
```python
from galileo import Message, MessageRole
from galileo.datasets import get_dataset
from galileo.experiments import run_experiment
from galileo.prompts import create_prompt_template
prompt = create_prompt_template(
    name="my-prompt",
    project="new-project",
    messages=[
        Message(role=MessageRole.system, content="you are a helpful assistant"),
        Message(role=MessageRole.user, content="why is sky blue?"),
    ],
)

results = run_experiment(
    "my-experiment",
    dataset=get_dataset(name="storyteller-dataset"),
    prompt=prompt,
    metrics=["correctness"],
    project="new-project",
)
```
Run an experiment with a runner function and a local dataset:
```python
import openai
from galileo.experiments import run_experiment
dataset = [
    {"name": "Lola"},
    {"name": "Jo"},
]

def runner(input):
    return openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"Say hello: {input['name']}"}
        ],
    ).choices[0].message.content

run_experiment(
    "test experiment runner",
    project="awesome-new-project",
    dataset=dataset,
    function=runner,
    metrics=["output_tone"],
)
```
#### Sessions
Sessions allow you to group related traces together. By default, a session is created for each trace and a session name is auto-generated. If you would like to override this, you can explicitly start a session:
```python
from galileo import GalileoLogger
logger = GalileoLogger(project="gen-ai-project", log_stream="my-log-stream")
session_id = logger.start_session(session_name="my-session-name")
...
logger.conclude()
logger.flush()
```
You can continue a previous session by using the same session ID that was previously generated:
```python
from galileo import GalileoLogger
logger = GalileoLogger(project="gen-ai-project", log_stream="my-log-stream")
logger.set_session(session_id="123e4567-e89b-12d3-a456-426614174000")
...
logger.conclude()
logger.flush()
```
All of this can also be done using the `galileo_context` context manager:
```python
from galileo import galileo_context
session_id = galileo_context.start_session(session_name="my-session-name")
# OR
galileo_context.set_session(session_id=session_id)
```
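For example, here is a sketch that groups the traced calls from earlier under one named session, assuming traces created after `start_session` in the same context are attached to it:

```python
from galileo import galileo_context

galileo_context.start_session(session_name="my-session-name")

# Traces created while the session is active are grouped under it
call_openai()

galileo_context.flush()
```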