# Galtea SDK
<p align="center">
<img src="https://galtea.ai/img/galtea_mod.png" alt="Galtea" width="500" height="auto"/>
</p>
<p align="center">
<strong>Comprehensive AI/LLM Testing & Evaluation Framework</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/galtea/">
<img src="https://img.shields.io/pypi/v/galtea.svg" alt="PyPI version">
</a>
<a href="https://pypi.org/project/galtea/">
<img src="https://img.shields.io/pypi/pyversions/galtea.svg" alt="Python versions">
</a>
<a href="https://www.apache.org/licenses/LICENSE-2.0">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License">
</a>
</p>
## Overview
Galtea SDK empowers AI engineers, ML engineers, and data scientists to rigorously test and evaluate their AI products. With a focus on reliability and transparency, Galtea offers:
1. **Automated Test Dataset Generation** - Create comprehensive test datasets tailored to your AI product
2. **Sophisticated Product Evaluation** - Evaluate your AI products across multiple dimensions
## Installation
```bash
pip install galtea
```
## Development
### Building the Project
This project uses Poetry for dependency management and packaging. To build the project:
```bash
poetry build
```
This will create distribution packages (wheel and source distribution) in the `dist/` directory.
### Development Setup
```bash
# Install dependencies
poetry install
# Activate the virtual environment
poetry shell
```
## Quick Start
```python
from galtea import Galtea
import os
# Initialize with your API key
galtea = Galtea(api_key=os.getenv("GALTEA_API_KEY"))
# Create a test, which is a collection of test cases
test = galtea.tests.create(
    name="factual-accuracy-test",
    type="QUALITY",
    product_id="your-product-id",
    ground_truth_file_path="path/to/ground-truth.pdf"
)

# Get test cases to iterate over
test_cases = galtea.test_cases.list(test.id)

# Create a version, which is a specific iteration of your product
version = galtea.versions.create(
    name="gpt-4-self-hosted-v1",
    product_id="your-product-id",
    description="Self-hosted GPT-4 equivalent model",
    endpoint="http://your-model-endpoint.com/v1/chat"
)

for test_case in test_cases:
    # Simulate a call to your product to get its output for a given test case
    # In a real scenario, you would call your actual product endpoint
    model_answer = f"The answer to '{test_case.input}' is..."

    # Run an evaluation task for each test case
    # An Evaluation is implicitly created to group these tasks
    galtea.evaluation_tasks.create_single_turn(
        metrics=["factual-accuracy", "coherence", "relevance"],
        version_id=version.id,
        test_case_id=test_case.id,
        actual_output=model_answer
    )
```
## Core Features
### 1. Test Creation
- **Quality Tests**: Assess response quality, coherence, and factual accuracy
- **Adversarial Tests**: Stress-test your models against edge cases and potential vulnerabilities (a hedged sketch follows the example below)
- **Ground Truth Integration**: Upload ground truth documents to validate factual responses
- **Custom Test Types**: Define tests tailored to your specific use cases and requirements
```python
# Create a test grounded in your own reference document
test = galtea.tests.create(
    name="medical-knowledge-test",
    type="QUALITY",
    product_id="your-product-id",
    ground_truth_file_path="medical_reference.pdf"
)
```
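
The bullet list above also mentions adversarial testing. A minimal sketch of creating one, noting that the type string `"ADVERSARIAL"` used here is an assumption rather than a confirmed value; check the Galtea docs for the test types your plan supports:

```python
# Hypothetical sketch: stress-test a product with an adversarial test.
# The type string "ADVERSARIAL" is an assumption; consult the Galtea docs
# for the exact test type values supported.
adversarial_test = galtea.tests.create(
    name="jailbreak-robustness-test",
    type="ADVERSARIAL",
    product_id="your-product-id"
)

# Iterate over the generated adversarial test cases as usual
adversarial_cases = galtea.test_cases.list(adversarial_test.id)
```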
### 2. Comprehensive Product Evaluation
Evaluate your AI products with sophisticated metrics:
- **Multi-dimensional Analysis**: Analyze outputs across various dimensions including accuracy, relevance, and coherence
- **Customizable Metrics**: Define your own evaluation criteria and rubrics
- **Batch Processing**: Run evaluations on large datasets efficiently
- **Detailed Reports**: Get comprehensive insights into your model's performance
```python
# Define custom evaluation metrics
custom_metric = galtea.metrics.create(
    name="medical-accuracy",
    criteria="Assess if the response is medically accurate based on the provided context.",
    evaluation_params=["actual output", "context"]
)

# Run batch evaluation
import pandas as pd

# Load your test data
test_data = pd.read_json("medical_queries.json")

# 1. Create a session for this batch evaluation
session = galtea.sessions.create(version_id=version.id, is_production=True)

# 2. Log each interaction as an inference result
for _, row in test_data.iterrows():
    # Get the response from your product (your_product_function is a placeholder)
    model_response = your_product_function(row["query"], row["medical_context"])

    # Log each turn to the session
    galtea.inference_results.create(
        session_id=session.id,
        input=row["query"],
        output=model_response,
        retrieval_context=row["medical_context"]
    )

# 3. Evaluate the entire session at once
galtea.evaluation_tasks.create(
    metrics=[custom_metric.name, "coherence", "toxicity"],
    session_id=session.id
)
```
## Managing Your AI Products
Galtea provides a complete ecosystem for evaluating and monitoring your AI products:
### Products
Represents a functionality or service that is evaluated by Galtea.
```python
# List your products
products = galtea.products.list()
# Select a product to work with
product = products[0]
```
### Versions
Represents a specific iteration of a product. This allows for tracking improvements and regressions over time.
```python
# Create a new version of your product
version = galtea.versions.create(
    name="gpt-4-fine-tuned-v2",
    product_id=product.id,
    description="Fine-tuned GPT-4 for medical domain",
    model_id="gpt-4",
    system_prompt="You are a helpful medical assistant..."
)

# List versions of your product
versions = galtea.versions.list(product_id=product.id)
```
### Tests
A collection of test cases designed to evaluate specific aspects of your product versions.
```python
# Create a test
test = galtea.tests.create(
    name="medical-qa-test",
    type="QUALITY",
    product_id=product.id,
    ground_truth_file_path="medical_data.pdf"
)

# Download a test file
test_file = galtea.tests.download(test, output_dir="tests")
```
### Test Cases
A single challenge for evaluating product performance. Each test case typically includes an input and may include an expected output and context.
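Beyond generated datasets, test cases can be managed directly through the SDK (see the Test Cases Management section of the API reference below). A minimal sketch; the input and expected output strings are illustrative:

```python
# Add a hand-written test case to an existing test
test_case = galtea.test_cases.create(
    test_id=test.id,
    input="What is the recommended dosage of ibuprofen for adults?",
    expected_output="The typical adult dose is 200-400 mg every 4-6 hours...",
    context="Extract from the dosage section of the reference document."
)

# List all test cases belonging to the test
test_cases = galtea.test_cases.list(test.id)
```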
### Sessions
A group of inference results that represent a complete conversation between a user and an AI system.
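A minimal sketch of the session lifecycle, using the calls listed in the API reference below (the `is_production` flag mirrors its use in the batch examples):

```python
# Open a session tied to a specific product version
session = galtea.sessions.create(version_id=version.id, is_production=False)

# Retrieve it later by ID, or list all sessions for the version
same_session = galtea.sessions.get(session.id)
sessions = galtea.sessions.list(version_id=version.id)
```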
### Inference Results
A single turn in a conversation, including the user's input and the AI's output. These are the raw interactions that can be evaluated.
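For example, logging one turn of a conversation to the session above and reading it back (the input and output strings are illustrative):

```python
# Log a single user/assistant turn to the session
galtea.inference_results.create(
    session_id=session.id,
    input="What are the side effects of aspirin?",
    output="Common side effects include stomach upset and heartburn..."
)

# Inspect everything logged so far for this session
results = galtea.inference_results.list(session_id=session.id)
```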
### Evaluations
A group of inference results from a session that can be evaluated. It acts as a container for all the evaluation tasks that measure how effectively the product version performs.
```python
# Evaluations are created implicitly when you log evaluation tasks.
# For example, when you run this, an evaluation is created behind the scenes:
galtea.evaluation_tasks.create_single_turn(
    metrics=["factual-accuracy"],
    version_id=version.id,
    test_case_id=test_cases[0].id,
    actual_output="Some output from your product."
)

# List evaluations for a product
evaluations = galtea.evaluations.list(product_id=product.id)
```
## Advanced Usage
### Custom Metrics
Define custom evaluation criteria specific to your needs:
```python
# Create a custom metric
custom_metric_1 = galtea.metrics.create(
    name="patient-safety-score-v1",
    criteria="Evaluate responses for patient safety considerations",
    evaluation_params=["actual output"]
)
```
### Batch Processing
Efficiently evaluate your model on large datasets:
```python
import pandas as pd
import os

# Load your test queries from a JSON file
queries_file = os.path.join(os.path.dirname(__file__), 'test_data.json')
df = pd.read_json(queries_file)

# Create a session for this batch evaluation
session = galtea.sessions.create(version_id=version.id, is_production=True)

# Process each query
for idx, row in df.iterrows():
    # Get your model's response to the query (your_product_function is a placeholder)
    model_response = your_product_function(row['query'])

    # Log each turn to the session
    galtea.inference_results.create(
        session_id=session.id,
        input=row['query'],
        output=model_response
    )

# Evaluate the entire session
galtea.evaluation_tasks.create(
    metrics=["relevance", custom_metric_1.name],
    session_id=session.id
)
```
## API Reference
### Main Classes
- **`Galtea`**: Main client for interacting with the Galtea platform
### Product Management
- **`galtea.products.list(offset=None, limit=None)`**: List available products
- **`galtea.products.get(product_id)`**: Get a specific product by ID
### Test Management
- **`galtea.tests.create(name, type, product_id, ground_truth_file_path=None, test_file_path=None)`**: Create a new test
- **`galtea.tests.get(test_id)`**: Retrieve a test by ID
- **`galtea.tests.list(product_id, offset=None, limit=None)`**: List tests for a product
- **`galtea.tests.download(test, output_dir)`**: Download test files to the specified directory
### Test Cases Management
- **`galtea.test_cases.create(test_id, input, expected_output, context=None)`**: Create a new test case
- **`galtea.test_cases.get(test_case_id)`**: Get a test case by ID
- **`galtea.test_cases.list(test_id, offset=None, limit=None)`**: List test cases for a test
- **`galtea.test_cases.delete(test_case_id)`**: Delete a test case by ID
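
The `offset` and `limit` parameters on the `list` methods above suggest offset-based pagination. A sketch of paging through a large test, assuming an exhausted offset returns an empty list (the stopping condition is an assumption; check the docs):

```python
# Page through test cases in chunks of 50 (assumes offset-based pagination
# and that listing past the end returns an empty list)
offset, limit = 0, 50
while True:
    page = galtea.test_cases.list(test.id, offset=offset, limit=limit)
    if not page:
        break
    for case in page:
        print(case.id, case.input)
    offset += limit
```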
### Version Management
- **`galtea.versions.create(product_id, name, description=None, ...)`**: Create a new product version
- **`galtea.versions.get(version_id)`**: Get a version by ID
- **`galtea.versions.list(product_id, offset=None, limit=None)`**: List versions for a product
### Metric Management
- **`galtea.metrics.create(name, criteria=None, evaluation_steps=None, evaluation_params=None)`**: Create a custom metric
- **`galtea.metrics.get(metric_type_id)`**: Get a metric by ID
- **`galtea.metrics.list(offset=None, limit=None)`**: List available metrics
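
Note that `metrics.create` accepts `evaluation_steps` as an alternative to a single `criteria` string. A hedged sketch, assuming `evaluation_steps` takes an ordered list of instructions for the evaluator (the step texts are illustrative):

```python
# Define a metric via explicit evaluation steps instead of a criteria string
# (the step texts below are illustrative; see the docs for the expected format)
stepwise_metric = galtea.metrics.create(
    name="grounded-answer-check",
    evaluation_steps=[
        "Check whether every claim in the actual output appears in the context.",
        "Penalize claims that contradict the context.",
        "Reward answers that acknowledge missing information."
    ],
    evaluation_params=["actual output", "context"]
)
```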
### Session Management
- **`galtea.sessions.create(version_id, ...)`**: Create a new session to log a conversation.
- **`galtea.sessions.get(session_id)`**: Get a session by ID.
- **`galtea.sessions.list(version_id, ...)`**: List sessions for a version.
- **`galtea.sessions.delete(session_id)`**: Delete a session by ID.
### Inference Result Management
- **`galtea.inference_results.create(session_id, input, output, ...)`**: Log a single turn in a conversation.
- **`galtea.inference_results.get(inference_result_id)`**: Get an inference result by ID.
- **`galtea.inference_results.list(session_id, ...)`**: List inference results for a session.
- **`galtea.inference_results.delete(inference_result_id)`**: Delete an inference result by ID.
### Evaluation Management
- An `Evaluation` is created implicitly when you create evaluation tasks.
- **`galtea.evaluations.get(evaluation_id)`**: Get an evaluation by ID
- **`galtea.evaluations.list(product_id, offset=None, limit=None)`**: List evaluations for a product
### Evaluation Tasks Management
- **`galtea.evaluation_tasks.list(evaluation_id, offset=None, limit=None)`**: List tasks performed for an evaluation
- **`galtea.evaluation_tasks.get(evaluation_task_id)`**: Get a specific task by ID
- **`galtea.evaluation_tasks.create(metrics, session_id)`**: Create evaluation tasks for all inference results within a given session.
- **`galtea.evaluation_tasks.create_single_turn(metrics, version_id, ...)`**: Create an evaluation task for a single-turn interaction, such as one based on a specific test case or a production query.
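
For the production-query case mentioned above, a hedged sketch; the `input` keyword is an assumption (the signature elides its remaining parameters), so check the docs for the exact argument names:

```python
# Evaluate a single production interaction that has no backing test case.
# The `input` parameter name is assumed here; consult the Galtea docs.
galtea.evaluation_tasks.create_single_turn(
    metrics=["relevance", "toxicity"],
    version_id=version.id,
    input="What are your opening hours?",
    actual_output="We are open Monday to Friday, 9am to 6pm."
)
```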
## Getting Help
- **Documentation**: [https://docs.galtea.ai/](https://docs.galtea.ai/)
- **Support**: [support@galtea.ai](mailto:support@galtea.ai)
## Authors
This software is developed by the product team at Galtea Solutions S.L.
## License
Apache License 2.0