# Grafa
<p align="center">
<em>Knowledge Graph Generation Library</em>
</p>

[CI](https://github.com/codingmaster8/grafa/actions)
[Coverage](https://codecov.io/gh/codingmaster8/grafa)
[PyPI](https://badge.fury.io/py/grafa)

---

**Documentation**: <a href="https://codingmaster8.github.io/grafa/" target="_blank">https://codingmaster8.github.io/grafa/</a>

**Source Code**: <a href="https://github.com/codingmaster8/grafa" target="_blank">https://github.com/codingmaster8/grafa</a>

---
## What is Grafa?
Grafa is a comprehensive Python library for building, managing, and querying knowledge graphs. It provides an end-to-end solution for:
- **Document Ingestion**: Upload and process documents (text files, PDFs, etc.)
- **Intelligent Chunking**: Break documents into meaningful chunks using agentic chunking strategies
- **Entity Extraction**: Automatically extract entities and relationships from text using LLMs
- **Knowledge Graph Construction**: Build structured knowledge graphs in Neo4j
- **Smart Search**: Perform semantic, text-based, and hybrid searches across your knowledge base
- **Deduplication**: Automatically merge similar entities to maintain graph quality

## Key Features
### 🚀 Easy Setup
- Schema-driven approach using YAML configuration
- Automatic Neo4j index creation (vector and text indexes)
- Built-in support for AWS S3 storage and local file storage
### 🧠 AI-Powered Processing
- LLM-based entity and relationship extraction
- Semantic similarity search using embeddings
- Intelligent entity deduplication and merging
### 🔍 Advanced Search Capabilities
- **Semantic Search**: Vector-based similarity search
- **Text Search**: Full-text search with fuzzy matching
- **Hybrid Search**: Combines semantic and text approaches
- **Name Matching**: Edit distance-based name matching
### 📊 Flexible Node Types
- Built-in node types: Documents, Chunks, Document History
- Custom node types defined via YAML schema
- Support for metadata, embeddings, and relationships
## Installation
```bash
pip install grafa
```
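
To confirm the install succeeded, you can query the installed version with the standard library (no Grafa-specific API assumed):

```bash
python -c "from importlib.metadata import version; print(version('grafa'))"
```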
## Quick Start
### 1. Define Your Schema
Create a YAML file ([schema.yaml](schema.yaml)) to define your knowledge graph structure:
```yaml
database:
  name: "Business Concepts"
  description: "A knowledge graph for business concepts"

node_types:
  Person:
    description: "A person"
    fields:
      occupation:
        type: STRING
        description: "Occupation of the person"
    options:
      link_to_chunk: false
      embed: false

  Company:
    description: "A company"
    fields:
      description:
        type: STRING
        description: "Description of the company"
    options:
      link_to_chunk: false
      embed: false

  Concept:
    description: "A business concept"
    fields:
      description:
        type: STRING
        description: "Description of the concept"
    options:
      link_to_chunk: true
      semantic_search: true
      text_search: true

relationships:
  - from_type: Person
    to_type: Company
    relationship_type: WORKS_AT
    description: "A person works at a company"

  - from_type: Company
    to_type: Concept
    relationship_type: IS_RELATED_TO
    description: "A company is related to a concept"
```
### 2. Initialize the Client
```python
from grafa import GrafaClient

# Create a client from a YAML schema
client = await GrafaClient.from_yaml(
    yaml_path="schema.yaml",
    db_name="my_knowledge_base"
)

# Or connect to an existing database
client = await GrafaClient.create(db_name="existing_db")
```
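
Note that the client API is asynchronous, so the `await` calls above work as-is in a notebook. In a plain script you would drive them from an event loop; a minimal sketch using the standard library's `asyncio`:

```python
import asyncio

from grafa import GrafaClient


async def main() -> None:
    # Connect to an existing database, as in the snippet above
    client = await GrafaClient.create(db_name="existing_db")
    # ... use the client here ...


asyncio.run(main())
```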
### 3. Ingest Documents
```python
# Upload and process a document
document, chunks, entities, relationships = await client.ingest_file(
    document_name="business_guide",
    document_path="path/to/document.txt",
    context="Business processes and concepts",
    author="John Doe",
    max_token_chunk_size=500,
    deduplication_similarity_threshold=0.6
)

print(f"Created {len(chunks)} chunks")
print(f"Extracted {sum(len(e) for e in entities)} entities")
```
### 4. Search Your Knowledge Base
```python
# Semantic search
results = await client.similarity_search(
    query="What is revenue management?",
    node_types=["Concept"],
    search_mode="semantic",
    limit=10
)

# Hybrid search (semantic + text)
results = await client.similarity_search(
    query="company revenue strategies",
    search_mode="hybrid",
    semantic_threshold=0.7,
    text_threshold=0.5
)

# Knowledge base query (returns formatted context)
answer = await client.knowledgebase_query(
    query="How do we measure promotional effectiveness?",
    max_hops=2,
    return_formatted=True
)
print(answer)
```
## Configuration
### Environment Variables
Set these environment variables for database and storage configuration:
```bash
# Neo4j Configuration
export GRAFA_URI="neo4j+s://your-database.neo4j.io"
export GRAFA_USERNAME="neo4j"
export GRAFA_PASSWORD="your-password"
# Storage Configuration (choose one)
export GRAFA_S3_BUCKET="your-s3-bucket" # For S3 storage
export GRAFA_LOCAL_STORAGE_PATH="/local/path" # For local storage
```
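
If you prefer to configure from Python (for example at the top of a notebook), the same variables can be set through `os.environ` before the client is created; a minimal sketch using the variable names above:

```python
import os

# Neo4j connection (same values as the shell exports above)
os.environ["GRAFA_URI"] = "neo4j+s://your-database.neo4j.io"
os.environ["GRAFA_USERNAME"] = "neo4j"
os.environ["GRAFA_PASSWORD"] = "your-password"

# Storage: set one of the two options
os.environ["GRAFA_LOCAL_STORAGE_PATH"] = "/local/path"
```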
### Custom Configuration
```python
from grafa import GrafaClient, GrafaConfig
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

# Create a custom configuration
config = await GrafaConfig.create(
    embedding_model=OpenAIEmbeddings(model="text-embedding-3-small"),
    embedding_dimension=1536,
    llm=ChatOpenAI(model="gpt-4"),
    s3_bucket="my-documents-bucket"
)

client = await GrafaClient.create(
    db_name="my_db",
    grafa_config=config
)
```
## Schema Definition
### Node Types
Define custom node types with fields and options:
```yaml
node_types:
  Product:
    description: "A product in our catalog"
    fields:
      price:
        type: FLOAT
        description: "Product price"
      category:
        type: STRING
        description: "Product category"
      features:
        type: LIST
        description: "List of product features"
    options:
      link_to_chunk: true     # Link to source chunks
      semantic_search: true   # Enable vector search
      text_search: true       # Enable full-text search
      unique_name: true       # Enforce unique names
```
### Field Types
- `STRING`: Text fields
- `INTEGER`: Numeric integers
- `FLOAT`: Numeric floats
- `BOOLEAN`: True/false values
- `LIST`: Arrays of values
- `DATETIME`: Date and time values
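
The `Product` example above covers `STRING`, `FLOAT`, and `LIST`; for completeness, here is a hypothetical `Event` node type (not part of the example schema) exercising the remaining types:

```yaml
node_types:
  Event:
    description: "A scheduled business event"
    fields:
      attendee_count:
        type: INTEGER
        description: "Expected number of attendees"
      is_recurring:
        type: BOOLEAN
        description: "Whether the event repeats"
      starts_at:
        type: DATETIME
        description: "Start date and time"
```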
### Node Options
- `link_to_chunk`: Whether nodes link back to source chunks
- `semantic_search`: Enable vector-based semantic search
- `text_search`: Enable full-text search indexing
- `unique_name`: Enforce unique names for this node type
- `embed`: Whether to generate embeddings for this node type
## Advanced Features
### Entity Deduplication
Grafa automatically deduplicates similar entities during ingestion:
```python
# Configure deduplication thresholds
await client.ingest_file(
    document_name="document.txt",
    document_path="path/to/document.txt",
    deduplication_similarity_threshold=0.8,  # Semantic similarity
    deduplication_text_threshold=0.6,        # Text similarity
    deduplication_word_edit_distance=3       # Name edit distance
)
```
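
Grafa's exact name-matching routine isn't spelled out here, but to build intuition for the edit-distance threshold, here is a self-contained character-level Levenshtein sketch (the character-level granularity is an assumption, not confirmed Grafa behavior):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


# Under a threshold of 3, near-identical names would be merged:
print(edit_distance("Acme Corp", "Acme Corp."))    # 1 -> duplicate
print(edit_distance("Acme Corp", "ACME Company"))  # well above 3 -> kept apart
```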
### Custom Chunking
Use different chunking strategies:
```python
from grafa.document.chunking import agentic_chunking

# Create the document first
document = await client.upload_file(
    document_name="guide.txt",
    document_path="path/to/guide.txt"
)

# Custom chunking with specific parameters
chunks = await client.chunk_document(
    document,
    max_token_chunk_size=800,
    verbose=True,
    output_language="en"
)
```
### Search Modes
Different search strategies for different use cases:
```python
# Pure semantic search (vector embeddings)
semantic_results = await client.similarity_search(
    query="machine learning algorithms",
    search_mode="semantic",
    semantic_threshold=0.75
)

# Pure text search (full-text index)
text_results = await client.similarity_search(
    query="revenue management strategies",
    search_mode="text",
    text_threshold=0.6
)

# Hybrid search (combines both)
hybrid_results = await client.similarity_search(
    query="customer segmentation",
    search_mode="hybrid",
    semantic_threshold=0.7,
    text_threshold=0.5
)

# Automatic mode (uses whatever indexes are available)
auto_results = await client.similarity_search(
    query="business metrics",
    search_mode="allowed"  # Default
)
```
## Examples
The [examples/](examples/) directory contains comprehensive examples:
- [`client.ipynb`](examples/client.ipynb): Basic client usage
- [`graphrag.ipynb`](examples/graphrag.ipynb): Complete GraphRAG implementation
- [`search.ipynb`](examples/search.ipynb): Advanced search examples
- [`chunking.ipynb`](examples/chunking.ipynb): Document chunking strategies
- [`database_info.ipynb`](examples/database_info.ipynb): Database schema exploration
## Core Components
### GrafaClient
The main interface for all operations ([grafa/client.py](grafa/client.py)):
- Document ingestion and processing
- Entity extraction and relationship building
- Search and retrieval operations
- Database management
### Node Types
Built-in node types ([grafa/models.py](grafa/models.py)):
- **GrafaDocument**: Represents uploaded documents
- **GrafaChunk**: Document chunks with content and metadata
- **GrafaDocumentHistory**: Version history for documents
- **GrafaDatabase**: Database schema and configuration
### Dynamic Models
Custom node types generated from YAML ([grafa/dynamic_models.py](grafa/dynamic_models.py)):
- Runtime model creation from schema
- Automatic relationship validation
- Field type mapping and validation
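
Purely as an illustration of the runtime-model idea (this is not Grafa's actual code), Pydantic, a Grafa dependency, can construct model classes from data at runtime:

```python
from pydantic import create_model

# Hypothetical field spec, shaped like a parsed YAML node type
fields = {
    "price": (float, ...),   # required float, like type: FLOAT
    "category": (str, ...),  # required string, like type: STRING
}

# Build the model class at runtime; a schema-driven generator would
# layer relationship and search-option handling on top of a mechanism
# like this.
Product = create_model("Product", **fields)

print(Product(price=9.99, category="books"))
```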
## Development
### Setup environment
We use [Hatch](https://hatch.pypa.io/latest/install/) to manage the development environment and production build. Ensure it's installed on your system.
### Run unit tests
You can run all the tests with:
```bash
hatch run test
```
### Format the code
Run linting and type checking with:
```bash
hatch run lint
```
### Publish a new version
You can bump the version and create the associated commit and tag with a single command:
```bash
hatch version patch   # or: minor, major
```
Your default Git text editor will open so you can add information about the release.
When you push the tag to GitHub, the workflow will automatically publish the package to PyPI and create a draft GitHub release.
## Serve the documentation
You can serve the MkDocs documentation locally with:
```bash
hatch run docs-serve
```
It'll automatically watch for changes in your code.
## Requirements
- Python 3.11+
- Neo4j database (local or cloud)
- OpenAI API key (for embeddings and LLM operations)
- AWS credentials (if using S3 storage)
## License
This project is licensed under the terms of the MIT License.