# Ragatanga
[MIT License](https://opensource.org/licenses/MIT) · [Release v0.3.1](https://github.com/jquant/ragatanga/releases/tag/v0.3.1)
Ragatanga is a hybrid retrieval system that combines ontology-based reasoning with semantic search for powerful knowledge retrieval.
## Features
- **💪 Hybrid Retrieval**: Combines SPARQL queries against an ontology with semantic search for comprehensive knowledge retrieval
- **🧠 Adaptive Parameters**: Dynamically adjusts retrieval parameters based on query complexity and type
- **🔄 Multiple Embedding Providers**: Support for OpenAI, HuggingFace, and Sentence Transformers embeddings
- **💬 Multiple LLM Providers**: Support for OpenAI, HuggingFace, Ollama, and Anthropic LLMs
- **🌐 Comprehensive API**: FastAPI endpoints for querying and managing knowledge
- **📊 Confidence Scoring**: Ranks results with confidence scores for higher quality answers
- **🌍 Multilingual Support**: Translates queries to match your ontology's language
- **⚙️ Flexible Configuration**: Comprehensive configuration options through environment variables and config module
## Installation
```bash
# Install the latest version from PyPI
pip install ragatanga

# Install a specific version
pip install ragatanga==0.3.1

# Install from source
git clone https://github.com/jquant/ragatanga.git
cd ragatanga
pip install -e .
```
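To verify the installation, import the package and print its version. This is a minimal check; it assumes the version string is exposed as `ragatanga.__version__`, which the `_version.py` module suggests but this README does not confirm:

```python
# Minimal post-install sanity check.
# Assumption: the package exposes its version as ragatanga.__version__.
import ragatanga

print(ragatanga.__version__)  # expected: something like "0.3.1"
```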
## Quick Start
```python
import asyncio
from ragatanga.core.ontology import OntologyManager
from ragatanga.core.retrieval import AdaptiveRetriever
from ragatanga.core.query import generate_structured_answer
from ragatanga.config import ONTOLOGY_PATH, KNOWLEDGE_BASE_PATH
async def main():
    # Initialize ontology manager with the sample ontology
    # (the package includes a sample ontology file that's loaded by default)
    ontology_manager = OntologyManager(ONTOLOGY_PATH)
    await ontology_manager.load_and_materialize()

    # Initialize adaptive retriever
    retriever = AdaptiveRetriever(ontology_manager)

    # Retrieve information for a query
    query = "What classes are defined in the sample ontology?"
    retrieved_texts, confidence_scores = await retriever.retrieve(query)

    # Generate an answer
    answer = await generate_structured_answer(query, retrieved_texts, confidence_scores)

    # Print the answer
    print(answer.answer)

    # Try additional queries related to the sample data
    sample_queries = [
        "What properties does Class1 have?",
        "How many individuals are in the ontology?",
        "Describe the relationship between Class1 and Class2"
    ]

    for sample_query in sample_queries:
        print(f"\nQuery: {sample_query}")
        texts, scores = await retriever.retrieve(sample_query)
        result = await generate_structured_answer(sample_query, texts, scores)
        print(f"Answer: {result.answer}")

if __name__ == "__main__":
    asyncio.run(main())
```
## API Usage
Start the API server (which will use the sample data files by default):
```bash
python -m ragatanga.main
```
Then, query the API with questions about the sample data:
```bash
# Query about the sample ontology classes
curl -X POST "http://localhost:8000/query" \
     -H "Content-Type: application/json" \
     -d '{"query": "What classes are defined in the sample ontology?"}'

# Query about sample ontology properties
curl -X POST "http://localhost:8000/query" \
     -H "Content-Type: application/json" \
     -d '{"query": "What properties does Class1 have?"}'

# Get statistics about the sample ontology
curl -X GET "http://localhost:8000/describe_ontology"
```
### API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/query` | POST | Process a natural language query |
| `/upload/ontology` | POST | Upload a new ontology file |
| `/download/ontology` | GET | Download the current ontology file |
| `/upload/kb` | POST | Upload a new knowledge base file |
| `/download/kb` | GET | Download the current knowledge base |
| `/describe_ontology` | GET | Get detailed statistics about the ontology |
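The same endpoints can be called from Python. The sketch below uses the `requests` library (not a Ragatanga dependency, so install it separately) and assumes the upload endpoints accept a multipart form field named `file`; check the interactive `/docs` page that FastAPI serves by default for the exact request shapes.

```python
# A minimal sketch of driving the API from Python with requests.
# Assumptions: server on localhost:8000, upload endpoints accept a
# multipart field named "file".
import requests

BASE_URL = "http://localhost:8000"

# Ask a natural-language question
resp = requests.post(
    f"{BASE_URL}/query",
    json={"query": "What classes are defined in the sample ontology?"},
)
print(resp.json())

# Upload a replacement ontology
with open("my_ontology.ttl", "rb") as f:
    print(requests.post(f"{BASE_URL}/upload/ontology", files={"file": f}).status_code)

# Download the current knowledge base
with open("knowledge_base.md", "wb") as out:
    out.write(requests.get(f"{BASE_URL}/download/kb").content)
```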
## Configuration
Ragatanga can be configured through environment variables:
- `OPENAI_API_KEY`: Your OpenAI API key (required for OpenAI providers)
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required for Anthropic provider)
- `HF_API_KEY`: Your HuggingFace API key (required for HuggingFace API)
- `EMBEDDING_PROVIDER`: Embedding provider to use (openai, huggingface, sentence-transformers)
- `LLM_PROVIDER`: LLM provider to use (openai, huggingface, ollama, anthropic)
- `ONTOLOGY_PATH`: Path to your custom ontology file
- `KNOWLEDGE_BASE_PATH`: Path to your custom knowledge base file
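For example, assuming `ragatanga.config` reads these variables at import time (typical for a config module, but not spelled out here), you can set them in the environment before importing the package:

```python
# Sketch: configure Ragatanga via environment variables before importing it,
# assuming ragatanga.config reads the environment at import time.
import os

os.environ["EMBEDDING_PROVIDER"] = "sentence-transformers"   # local embeddings
os.environ["LLM_PROVIDER"] = "openai"
os.environ["OPENAI_API_KEY"] = "sk-..."                      # placeholder; needed for the OpenAI provider
os.environ["ONTOLOGY_PATH"] = "/data/my_ontology.ttl"        # hypothetical paths to your own data
os.environ["KNOWLEDGE_BASE_PATH"] = "/data/my_kb.md"

from ragatanga.config import ONTOLOGY_PATH, KNOWLEDGE_BASE_PATH
print(ONTOLOGY_PATH, KNOWLEDGE_BASE_PATH)
```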
## Sample Data
Ragatanga comes with sample data to help you get started immediately:
### Sample Ontology (`sample_ontology.ttl`)
A minimal ontology demonstrating the basic structure with:
- Classes representing key concepts
- Properties defining relationships between classes
- Individuals (instances) of the classes
- Labels and descriptions for improved readability
This sample ontology uses standard OWL/RDF patterns and can be used as a template for building your own domain-specific ontologies.
### Sample Knowledge Base (`sample_knowledge_base.md`)
A markdown file with text descriptions that complement the ontology:
- Detailed explanations of concepts
- Usage examples
- FAQs about the domain
- Additional unstructured information
This sample knowledge base demonstrates how to structure markdown for optimal chunking and retrieval.
You can replace these sample files with your own data by setting the `ONTOLOGY_PATH` and `KNOWLEDGE_BASE_PATH` environment variables.
## Architecture
Ragatanga's modular architecture includes:
- **Core**: Core functionality for ontology management, retrieval, and query processing
  - `ontology.py`: Ontology loading, inference, and SPARQL query execution
  - `semantic.py`: Semantic search using vector embeddings
  - `retrieval.py`: Hybrid retrieval combining ontology and semantic search
  - `query.py`: Query analysis and answer generation
  - `llm.py`: Abstraction for different LLM providers
- **API**: FastAPI application and endpoints
  - `app.py`: FastAPI application and lifecycle management
  - `routes.py`: API endpoint definitions
  - `models.py`: Pydantic models for request/response validation
- **Utils**: Utility functions for embeddings, SPARQL, and translation
  - `embeddings.py`: Embedding providers and utilities
  - `sparql.py`: SPARQL query generation and utilities
  - `translation.py`: Multilingual support for query translation
- **System**: Configuration and version management
  - `config.py`: System-wide configuration settings
  - `_version.py`: Version tracking and information
## Advanced Usage
### Using Different Embedding Providers
```python
from ragatanga.utils.embeddings import EmbeddingProvider
from ragatanga.config import KNOWLEDGE_BASE_PATH

# Get a specific provider
embed_provider = EmbeddingProvider.get_provider("sentence-transformers")

# Embed a query about the sample data
# (embed_query is a coroutine, so call it from within an async function)
query_embedding = await embed_provider.embed_query("What classes are defined in the sample ontology?")

# You can also embed the entire knowledge base
with open(KNOWLEDGE_BASE_PATH, "r") as f:
    kb_text = f.read()

kb_embedding = await embed_provider.embed_query(kb_text)
```
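Since embeddings are plain numeric vectors, you can compare them directly. Continuing from the snippet above, the sketch below computes cosine similarity with numpy (already a Ragatanga dependency); the exact return type of `embed_query` isn't documented here, so the values are coerced with `np.asarray`:

```python
import numpy as np

# Cosine similarity between the query embedding and the knowledge-base
# embedding from the snippet above (assumes embed_query returns 1-D vectors).
q = np.asarray(query_embedding, dtype=float)
d = np.asarray(kb_embedding, dtype=float)
similarity = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
print(f"Cosine similarity: {similarity:.3f}")
```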
Available embedding providers:
- `openai`: OpenAI's text-embedding models (requires `OPENAI_API_KEY`)
- `huggingface`: HuggingFace's embedding models (requires `HF_API_KEY`)
- `sentence-transformers`: Local Sentence Transformers models (run locally, no API key required)
### Using Different LLM Providers
```python
from ragatanga.core.llm import LLMProvider
from ragatanga.config import ONTOLOGY_PATH
import rdflib
# Get a specific provider
llm_provider = LLMProvider.get_provider("huggingface", model="mistralai/Mistral-7B-Instruct-v0.2")
# Load the sample ontology
g = rdflib.Graph()
g.parse(ONTOLOGY_PATH, format="turtle")
# Get all classes from the sample ontology
query = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?class ?label
WHERE {
    ?class a owl:Class .
    OPTIONAL { ?class rdfs:label ?label }
}
"""
results = g.query(query)
# Generate a summary of the classes in the sample ontology
classes_text = "\n".join(f"Class: {cls}, Label: {label}" for cls, label in results)
system_prompt = "You are an ontology expert helping to document an OWL ontology."
prompt = f"Here are the classes in our sample ontology:\n{classes_text}\n\nPlease generate a brief summary of these classes."

response = await llm_provider.generate_text(
    prompt=prompt,
    system_prompt=system_prompt
)
print(response)
```
### Customizing Ontology Management
```python
from ragatanga.core.ontology import OntologyManager, extract_relevant_schema
from ragatanga.config import ONTOLOGY_PATH
# Initialize with the sample ontology file
manager = OntologyManager(ONTOLOGY_PATH)
# Load and materialize inferences
await manager.load_and_materialize()
# Get statistics about the sample ontology
stats = manager.get_ontology_statistics()
print(f"Classes: {stats['statistics']['total_classes']}")
print(f"Individuals: {stats['statistics']['total_individuals']}")
print(f"Properties: {stats['statistics']['total_properties']}")
# Execute a SPARQL query against the sample ontology
results = await manager.execute_sparql("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?class ?label
    WHERE {
        ?class a owl:Class ;
               rdfs:label ?label .
    }
""")
# Extract schema relevant to a query using the sample ontology
schema = await extract_relevant_schema("What are the properties of Class1?", ONTOLOGY_PATH)
```
### Working with Knowledge Bases
Ragatanga can use text knowledge bases in addition to ontologies:
```python
from ragatanga.core.semantic import SemanticSearch
from ragatanga.config import KNOWLEDGE_BASE_PATH
# Initialize semantic search
semantic_search = SemanticSearch()
# Load the sample knowledge base file
await semantic_search.load_knowledge_base(KNOWLEDGE_BASE_PATH)
# Search the sample knowledge base for information
results = await semantic_search.search("What information is available about sample topics?", k=5)
# Search with similarity scores
results, scores = await semantic_search.search_with_scores("Tell me about the sample classes", k=5)
# Print the results
for i, (text, score) in enumerate(zip(results, scores)):
print(f"Result {i+1} (Confidence: {score:.2f}):")
print(f"{text}\n")
```
## Knowledge Base Format
Ragatanga accepts knowledge base files in markdown format, with entries separated by blank lines:
```markdown
# Entity Name 1

This is information about Entity 1. The system will chunk this
content for retrieval later.

# Entity Name 2

This is information about Entity 2. Each chunk will be embedded
and retrievable through semantic search.
```
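For illustration, here is a rough, hypothetical sketch of what blank-line chunking looks like; Ragatanga's actual chunking happens inside its semantic search pipeline and may differ in detail:

```python
# Hypothetical illustration of blank-line chunking for a markdown knowledge
# base; not Ragatanga's internal implementation.
def split_into_chunks(markdown_text):
    # Entries are separated by one or more blank lines.
    return [chunk.strip() for chunk in markdown_text.split("\n\n") if chunk.strip()]

with open("sample_knowledge_base.md", "r", encoding="utf-8") as f:
    for i, chunk in enumerate(split_into_chunks(f.read()), start=1):
        print(f"--- chunk {i} ---\n{chunk}\n")
```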
## Ontology Format
Ragatanga works with OWL/RDF ontologies in Turtle (.ttl) format:
```turtle
@prefix : <http://example.org/ontology#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:Entity1 rdf:type owl:Class ;
    rdfs:label "Entity 1" .

:property1 rdf:type owl:ObjectProperty ;
    rdfs:domain :Entity1 ;
    rdfs:range :Entity2 .
```
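Before pointing `ONTOLOGY_PATH` at a new file, it is worth confirming that the Turtle parses cleanly. Here is a small sketch using rdflib (already a Ragatanga dependency); the file path is hypothetical:

```python
import rdflib
from rdflib.namespace import OWL, RDF

# Parse the Turtle file and count declared OWL classes as a quick sanity check.
g = rdflib.Graph()
g.parse("my_ontology.ttl", format="turtle")  # hypothetical path to your ontology

classes = list(g.subjects(RDF.type, OWL.Class))
print(f"Parsed {len(g)} triples with {len(classes)} owl:Class declarations")
```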
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.