querent

Name: querent
Version: 3.1.1
Home page: https://github.com/Querent-ai/querent-ai
Summary: The Asynchronous Data Dynamo and Graph Neural Network Catalyst
Upload time: 2024-05-13 07:21:27
Author: Querent AI
Requires Python: <3.11,>=3.10
License: Business Source License 1.1
Keywords: graph neural network, scalability, data-driven insights, gnn, async, knowledge graphs, kg, large language models, asyncio, insights, asynchronous, llm, transformers, pytorch, llama-index, ai, artificial intelligence, neo4j, queues, quiassisstant, collectors, data, data science, data engineering, data analysis, data analytics, news, nlp, natural language processing, text, text analysis, deep learning, graphs, graph theory, graph algorithms, graph analytics, graph databases, graph processing, graph mining, gnns
# Querent

The Asynchronous Data Dynamo and Graph Neural Network Catalyst

![image](https://github.com/Querent-ai/querent-ai/assets/61435908/39124a0c-3d9e-434f-9b54-9aa51dcefbd7)



## Unlock Insights, Asynchronous Scaling, and Forge a Knowledge-Driven Future

🚀 **Async at its Core**: Querent thrives in an asynchronous world. With asynchronous processing, we handle multiple data sources seamlessly, eliminating bottlenecks for utmost efficiency.

💡 **Knowledge Graphs Made Easy**: Constructing intricate knowledge graphs is a breeze. Querent's robust architecture simplifies building comprehensive knowledge graphs, enabling you to uncover hidden data relationships.

🌐 **Scalability Redefined**: Scaling your data operations is effortless with Querent. We scale horizontally, empowering you to process multiple data streams without breaking a sweat.

🔬 **GNN Integration**: Querent seamlessly integrates with Graph Neural Networks (GNNs), enabling advanced data analysis, recommendation systems, and predictive modeling.

🔍 **Data-Driven Insights**: Dive deep into data-driven insights with Querent's tools. Extract actionable information and make data-informed decisions with ease.

🧠 **Leverage Language Models**: Utilize state-of-the-art language models (LLMs) for text data. Querent empowers natural language processing, tackling complex text-based tasks.

📈 **Efficient Memory Usage**: Querent is mindful of memory constraints. Our framework uses memory-efficient techniques, ensuring you can handle large datasets economically.

## Table of Contents

- [Querent](#querent)
  - [Unlock Insights, Asynchronous Scaling, and Forge a Knowledge-Driven Future](#unlock-insights-asynchronous-scaling-and-forge-a-knowledge-driven-future)
  - [Table of Contents](#table-of-contents)
  - [Introduction](#introduction)
  - [Features](#features)
  - [Getting Started](#getting-started)
    - [Prerequisites](#prerequisites)
    - [Installation](#installation)
  - [Usage](#usage)
  - [Configuration](#configuration)
  - [Querent: an asynchronous engine for LLMs](#querent-an-asynchronous-engine-for-llms)
  - [Ease of Use](#ease-of-use)
  - [Contributing](#contributing)
  - [License](#license)

## Introduction

Querent is designed to simplify and optimize data collection and processing workflows. Whether you need to scrape web data, ingest files, preprocess text, or create complex knowledge graphs, Querent offers a flexible framework for building and scaling these processes.

## Features

- **Collectors:** Gather data from various sources asynchronously, including web scraping and file collection.

- **Ingestors:** Process collected data efficiently with custom transformations and filtering.

- **Processors:** Apply asynchronous data processing, including text preprocessing, cleaning, and feature extraction.

- **Engines:** Execute a suite of LLM engines to extract insights from data, leveraging parallel processing for enhanced efficiency.

- **Storage:** Store processed data in various storage systems, such as databases or cloud storage.

- **Workflow Management:** Efficiently manage and scale data workflows with task orchestration.

- **Scalability:** Querent is designed to scale horizontally, handling large volumes of data with ease.

## Getting Started

Let's get Querent up and running on your local machine.

### Prerequisites

- Python 3.10 (the package requires `>=3.10,<3.11`)
- Virtual environment (optional but recommended)

### Installation

1. Create a virtual environment (recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```
2. Install the latest Querent workflow orchestrator package:

   ```bash
   pip install querent
   ```
  
3. Download the spaCy English model used by Querent:

   ```bash
   python3 -m spacy download en_core_web_lg
   ```

4. Install the system dependencies (Debian/Ubuntu):

   ```bash
   sudo apt-get install tesseract-ocr libtesseract-dev ffmpeg antiword
   ```

## Usage

Querent provides a flexible framework that adapts to your specific data collection and processing needs. Here's how to get started:

1. **Configuration:** Set up collector, ingestor, and processor configurations as needed.

2. **Collecting Data:** Implement collector classes to gather data from chosen sources, handling errors and edge cases gracefully (see the sketch after this list).

3. **Processing Data:** Create ingestors and processors to clean, transform, and filter collected data. Apply custom logic to meet your requirements.

4. **Storage:** Choose your storage system (e.g., databases) and configure connections. Store processed data efficiently.

5. **Task Orchestration:** For large tasks, implement a task orchestrator to manage and distribute the workload.

6. **Scaling:** To handle scalability, consider running multiple instances of collectors and ingestors in parallel.

7. **Monitoring:** Implement monitoring and logging to track task progress, detect errors, and ensure smooth operation.

8. **Documentation:** Maintain thorough project documentation to make it easy for others (and yourself) to understand and contribute.
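
As a minimal sketch of steps 2 and 3 above, the snippet below resolves a file-system collector and drains it through an ingestor manager. It reuses `CollectorResolver`, `FSCollectorConfig`, and `IngestorFactoryManager` from the full example in [Ease of Use](#ease-of-use); the directory path is a placeholder.

```python
import asyncio
import uuid
from pathlib import Path

from querent.collectors.collector_resolver import CollectorResolver
from querent.common.uri import Uri
from querent.config.collector.collector_config import FSCollectorConfig
from querent.ingestors.ingestor_manager import IngestorFactoryManager


async def collect_and_ingest(directory_path: str) -> None:
    # Resolve a file-system collector for the given directory (step 2)
    collector = CollectorResolver().resolve(
        Uri("file://" + str(Path(directory_path).resolve())),
        FSCollectorConfig(root_path=directory_path, id=str(uuid.uuid4())),
    )
    await collector.connect()

    # Hand the collector to an ingestor manager and ingest everything (step 3)
    result_queue = asyncio.Queue()
    manager = IngestorFactoryManager(collectors=[collector], result_queue=result_queue)
    await manager.ingest_all_async()


# asyncio.run(collect_and_ingest("path/to/your/data/directory"))
```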

## Configuration

Querent relies on configuration files to define how collectors, ingestors, and processors operate. These files are typically located in the `config` directory. Ensure that you configure the components according to your project's requirements.
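
The same settings can also be constructed directly in code. A minimal sketch, assuming the `FSCollectorConfig` used in the example further below (the `root_path` value is a placeholder):

```python
import uuid

from querent.config.collector.collector_config import FSCollectorConfig

# A file-system collector config: the root directory to read from, plus a unique id
fs_config = FSCollectorConfig(
    root_path="path/to/your/data/directory",
    id=str(uuid.uuid4()),
)
```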

## Querent: an asynchronous engine for LLMs

**Sequence Diagram:** *Asynchronous Data Processing in Querent*

```mermaid
sequenceDiagram
    participant User
    participant Collector
    participant Ingestor
    participant Processor
    participant LLM
    participant Querent
    participant Storage
    participant Callback

    User->>Collector: Initiate Data Collection
    Collector->>Ingestor: Collect Data
    Ingestor->>Processor: Ingest Data
    Processor->>LLM: Process Data (IngestedTokens)
    LLM->>Processor: Processed Data (EventState)
    Processor->>Storage: Store Processed Data (CollectedBytes)
    Ingestor->>Querent: Send Ingested Data (IngestedTokens)
    Querent->>Processor: Process Ingested Data (IngestedTokens)
    Processor->>LLM: Process Data (IngestedTokens)
    LLM->>Processor: Processed Data (EventState)
    Callback->>Storage: Store Processed Data (EventState)
    Querent->>Processor: Processed Data Available (EventState)
    Processor->>Callback: Return Processed Data (EventState)
    Callback->>User: Deliver Processed Data (CollectedBytes)

    Note right of User: Asynchronous Flow
    Note right of Collector: Data Collection
    Note right of Ingestor: Data Ingestion
    Note right of Processor: Data Processing
    Note right of LLM: Language Model Processing
    Note right of Querent: Query Execution
    Note right of Storage: Data Storage
    Note right of Callback: Callback Invocation

```
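
The Callback lane in this diagram corresponds to subscribing an `EventCallbackInterface` to an engine, as the full example below does. A minimal sketch of that subscription step, assuming any `BaseEngine` instance named `engine`:

```python
from querent.callback.event_callback_interface import EventCallbackInterface
from querent.common.types.querent_event import EventState, EventType


class PrintingCallback(EventCallbackInterface):
    async def handle_event(self, event_type: EventType, event_state: EventState):
        # Invoked whenever the engine publishes a new EventState
        print(f"{event_type}: {event_state}")


# engine.subscribe(EventType.Graph, PrintingCallback())
```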

## Ease of Use

With Querent, creating scalable workflows with any LLM takes just a few lines of code.

```python
import pytest
import uuid
from pathlib import Path
import asyncio

from querent.callback.event_callback_interface import EventCallbackInterface
from querent.common.types.ingested_tokens import IngestedTokens
from querent.common.types.ingested_code import IngestedCode
from querent.common.types.ingested_images import IngestedImages
from querent.common.types.ingested_messages import IngestedMessages
from querent.common.types.querent_event import EventState, EventType
from querent.common.types.querent_queue import QuerentQueue
from querent.core.base_engine import BaseEngine
from querent.querent.querent import Querent
from querent.querent.resource_manager import ResourceManager
from querent.collectors.collector_resolver import CollectorResolver
from querent.common.uri import Uri
from querent.config.collector.collector_config import FSCollectorConfig
from querent.ingestors.ingestor_manager import IngestorFactoryManager

# Create the input queue and the resource manager
input_queue = QuerentQueue()
resource_manager = ResourceManager()


# Define a simple mock LLM engine for testing
class MockLLMEngine(BaseEngine):
    def __init__(self, input_queue: QuerentQueue):
        super().__init__(input_queue)

    async def process_tokens(self, data: IngestedTokens):
        if data is None or data.is_error():
            # the LLM developer can raise an error here or do something else
            # the developers of Querent can customize the behavior of Querent
            # to handle the error in a way that is appropriate for the use case
            self.set_termination_event()
            return
        # Set the state of the LLM
        # At any given point during the execution of the LLM, the LLM developer
        # can set the state of the LLM using the set_state method
        # The state of the LLM is stored in the state attribute of the LLM
        # The state of the LLM is published to subscribers of the LLM
        current_state = EventState(EventType.Graph, 1.0, "anything", "dummy.txt")
        await self.set_state(new_state=current_state)

    async def process_code(self, data: IngestedCode):
        pass

    async def process_messages(self, data: IngestedMessages):
        return super().process_messages(data)

    async def process_images(self, data: IngestedImages):
        return super().process_images(data)

    def validate(self):
        return True


@pytest.mark.asyncio
async def test_example_workflow_with_querent():
    # Initialize some collectors to collect the data
    directory_path = "path/to/your/data/directory"
    collectors = [
        CollectorResolver().resolve(
            Uri("file://" + str(Path(directory_path).resolve())),
            FSCollectorConfig(root_path=directory_path, id=str(uuid.uuid4())),
        )
    ]

    # Connect to the collector
    for collector in collectors:
        await collector.connect()

    # Set up the result queue
    result_queue = asyncio.Queue()

    # Create the IngestorFactoryManager
    ingestor_factory_manager = IngestorFactoryManager(
        collectors=collectors, result_queue=result_queue
    )

    # Start the ingest_all_async in a separate task
    ingest_task = asyncio.create_task(ingestor_factory_manager.ingest_all_async())

    ### A Typical Use Case ###
    # Create an engine to harness the LLM
    llm_mocker = MockLLMEngine(input_queue)

    # Define a callback function to subscribe to state changes
    class StateChangeCallback(EventCallbackInterface):
        async def handle_event(self, event_type: EventType, event_state: EventState):
            print(f"New state: {event_state}")
            print(f"New state type: {event_type}")
            assert event_state.event_type == EventType.Graph

    # Subscribe to state change events
    # This pattern is ideal as we can expose multiple events for each use case of the LLM
    llm_mocker.subscribe(EventType.Graph, StateChangeCallback())

    ## one can also subscribe to other events, e.g. EventType.CHAT_COMPLETION ...

    # Create a Querent instance with a single MockLLM
    # here we see the simplicity of the Querent
    # massive complexity is hidden in the Querent,
    # while being highly configurable, extensible, and scalable
    # async architecture helps to scale to multiple querenters
    # How async architecture works:
    #   1. Querent starts a worker task for each querenter
    #   2. Querenter starts a worker task for each worker
    #   3. Each worker task runs in a loop, waiting for input data
    #   4. When input data is received, the worker task processes the data
    #   5. The worker task notifies subscribers of state changes
    #   6. The worker task repeats steps 3-5 until termination
    querent = Querent(
        [llm_mocker],
        resource_manager=resource_manager,
    )
    # Start the querent
    querent_task = asyncio.create_task(querent.start())
    await asyncio.gather(ingest_task, querent_task)


if __name__ == "__main__":
    asyncio.run(test_example_workflow_with_querent())


```
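
Because the entry point is decorated with `@pytest.mark.asyncio`, this example can be run either directly with `python example.py` (via the `asyncio.run` guard at the bottom) or through `pytest` with the `pytest-asyncio` plugin installed; in both cases, point `directory_path` at a real directory first.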

## Contributing

Contributions to Querent are welcome! Please follow our [contribution guidelines](CONTRIBUTING.md) to get started.

## License

This project is licensed under the Business Source License 1.1 (BSL-1.1) - see the [LICENSE](LICENCE) file for details.

            
