neuro-san

Name: neuro-san
Version: 0.5.43
Summary: NeuroAI data-driven System for multi-Agent Networks - client, library and server
Upload time: 2025-07-11 17:49:53
Requires Python: >=3.10
Keywords: llm, langchain, agent, multi-agent
Requirements: leaf-common, leaf-server-common, grpcio, grpcio-health-checking, grpcio-reflection, grpcio-tools, protobuf, pyhocon, pyOpenSSL, boto3, botocore, idna, urllib3, aiohttp, ruamel.yaml, hvac, langchain, langchain-anthropic, langchain-community, langchain-google-genai, langchain-openai, langchain-nvidia-ai-endpoints, langchain-ollama, openai, tiktoken, bs4, pydantic, httpx, tornado, janus, watchdog, validators, timedinput, duckduckgo_search, json-repair
# Neuro SAN Data-Driven Agents

[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/cognizant-ai-lab/neuro-san)

**Neuro AI system of agent networks (Neuro SAN)** is a library for building data-driven multi-agent networks
which can be run as a library, or served up via an HTTP/gRPC server.

Motivation: People come with all their hopes and dreams and lay them at the altar
of a single LLM/agent, expecting it to handle the most complex tasks.  This often fails
because the scope is too big for a single LLM to handle.  People expect the
equivalent of an adult PhD to be at their disposal, but what they really get is a high-school intern.

Solution: Allow these problems to be broken up into smaller pieces so that multiple LLM-enabled
agents can communicate with each other to solve a single problem.

Neuro SAN agent networks can be entirely specified in a data-only
[HOCON](https://github.com/lightbend/config/blob/main/HOCON.md)
file format (think: JSON with comments, among other things), enabling subject matter experts
to be the authors of complex agent networks, not just programmers.
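To give a flavor of the format, here is a hand-written sketch of what such a file can look like. This is not a file from this repo, and the agent names (`greeter`, `wordsmith`) and exact keys are illustrative; see the agent hocon file reference in the docs for the authoritative key list.

```hocon
{
    # HOCON allows comments, unlike plain JSON.
    "llm_config": {
        "model_name": "gpt-4o"
    },
    "tools": [
        {
            # The first agent is the "front man" that talks to the user.
            "name": "greeter",
            "instructions": "Greet the user, delegating wording to your tools.",
            "tools": ["wordsmith"]
        },
        {
            # A downstream agent the front man can call.
            "name": "wordsmith",
            "function": {
                "description": "Composes a short greeting."
            },
            "instructions": "Produce a two-word greeting."
        }
    ]
}
```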

Neuro SAN agent networks can also call CodedTools (langchain or our own interface), which do things
that LLMs can't do on their own, like: query a web service, effectuate change via a web API, handle
private data correctly, do complex math operations, or copy large chunks of data without error.
While this aspect _does_ require programming skills, what you gain with Neuro SAN is a new way
to think about your problems: a weave between the natural-language tasks that LLMs are good at
and the traditional computing tasks that deterministic Python code gives you.

Neuro SAN also offers:

* channels for private data (aka sly_data) that should be kept out of LLM chat streams
* LLM-provider agnosticism, with data-only configuration of new LLMs when the new hotness arrives
* agent-specific LLM specifications - use the right LLM for the cost/latency/context-window/data-privacy needs of each agent
* fallback LLM specifications for when your favorite provider goes down
* powerful debugging information for gaining insight into your multi-agent systems
* server-readiness at scale
* distributed agent webs that call each other to work together, wherever they are hosted
* security-by-default - you set what private data is to be shared downstream/upstream
* test infrastructure for your agent networks, including:
    * data-driven test cases
    * the ability for LLMs to test your agent networks
    * an Assessor app which classifies the modes of failure for your agents, given a data-driven test case

## Running client and server

### Prep

#### Set up your virtual environment

##### Install Python dependencies

Set the PYTHONPATH environment variable:

    export PYTHONPATH=$(pwd)

Create and activate a new virtual environment:

    python3 -m venv venv
    . ./venv/bin/activate
    pip install neuro-san

OR, from the neuro-san project top level,
install the packages specified in the requirements file:

    pip install -r requirements.txt

##### Set necessary environment variables

In a terminal window, set at least these environment variables:

    export OPENAI_API_KEY="XXX_YOUR_OPENAI_API_KEY_HERE"

Any other API key environment variables for other LLM provider(s) also need to be set if you are using them.

### Using as a library (Direct)

From the top-level of this repo:

    python -m neuro_san.client.agent_cli --agent hello_world

Type in this input to the chat client:

    From earth, I approach a new planet and wish to send a short 2-word greeting to the new orb.

What should return is something like:

    Hello, world.

... but you are dealing with LLMs. Your results will vary!

### Client/Server Setup

#### Server

In the same terminal window, be sure the environment variable(s) listed above
are set before proceeding.

Option 1: Run the service directly.  (Most useful for development)

    python -m neuro_san.service.main_loop.server_main_loop

Option 2: Build and run the docker container for the hosting agent service:

    ./neuro_san/deploy/build.sh ; ./neuro_san/deploy/run.sh

These build.sh / Dockerfile / run.sh scripts are intended to be portable, so they can be used with
your own projects' registries and coded_tools work.

ℹ️ Ensure the required environment variables (OPENAI_API_KEY, AGENT_TOOL_PATH, and PYTHONPATH) are passed into the
container, either by exporting them before running run.sh or by configuring them inside the script.

#### Client

In another terminal start the chat client:

    python -m neuro_san.client.agent_cli --http --agent hello_world

### Extra info about agent_cli.py

There is help to be had with --help.

By design, you cannot see all agents registered with the service from the client.

When the chat client is given a newline as input, that implies "send the message".
This isn't great when you are copy/pasting multi-line input.  For that there is a
--first_prompt_file argument where you can specify a file to send as the first
message.

You can send private data that does not go into the chat stream as a single escaped
string of a JSON dictionary. For example:
--sly_data "{ \"login\": \"your_login\" }"
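Rather than hand-escaping the inner quotes, you can generate that argument with Python's standard library. A small sketch, assuming a POSIX shell on the receiving end:

```python
import json
import shlex

# Build the sly_data payload as a Python dict, then serialize it to JSON.
sly_data = {"login": "your_login"}
arg = json.dumps(sly_data)

# shlex.quote() wraps the JSON in a single shell-safe argument, so the
# embedded double quotes survive without manual backslash-escaping.
print(f"--sly_data {shlex.quote(arg)}")
```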

## Running Python unit/integration tests

To run Python unit/integration tests, follow the [instructions](docs/tests.md) here.

## Creating a new agent network

### Agent example files

Look at the hocon files in ./neuro_san/registries for examples of specific agent networks.

The natural question to ask is: What is a hocon file?
The simplest answer is that you can think of a hocon file as a JSON file that allows for comments.

Here are some descriptions of the example hocon files provided in this repo.
To play with them, specify their stem as the argument for --agent on the agent_cli.py chat client.
In some order of complexity, they are:

* hello_world

    This is the initial example used above and demonstrates
    a front-man agent talking to another agent downstream.

* esp_decision_assistant

    Very abstract, but also very powerful.
    A front man agent gathers information about a decision to make
    in ESP terms.  It then calls a prescriptor which in turn
    calls one or more predictors in order to help make the decision
    in an LLM-based ESP manner.

When adding new hocon files to that same directory, also add an entry for each
in the manifest.hocon file.

Run build.sh / run.sh as above to rebuild and reload the server,
then interact with it via the agent_cli.py chat client, making sure
you specify your agent correctly (per the hocon file stem).
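As a sketch of the shape of a manifest entry (illustrative only; `my_new_agent.hocon` is a made-up name, and the actual manifest.hocon in neuro_san/registries is the authority on the format):

```hocon
{
    # Each key is an agent network's hocon file name;
    # a true value makes that network available.
    "hello_world.hocon": true,
    "esp_decision_assistant.hocon": true,

    # Your new network; set to false to disable it.
    "my_new_agent.hocon": true,
}
```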

### More agent example files

Note that the .hocon files in this repo are more spartan for testing and simple
demonstration purposes.

For more examples of agent networks, documentation and tutorials,
see the [neuro-san-studio repo.](https://github.com/cognizant-ai-lab/neuro-san-studio)

For a complete list of agent network keys, see the [agent hocon file reference](docs/agent_hocon_reference.md).

### Manifest file

All agents used need to have an entry in a single manifest hocon file.
For the neuro-san repo, this is: neuro_san/registries/manifest.hocon.

When you create your own repo for your own agents, that will be different
and you will need to create your own manifest file.  To point the system
at your own manifest file, set a new environment variable:

    export AGENT_MANIFEST_FILE=<your_repo>/registries/manifest.hocon

## Infrastructure

The agent infrastructure can be run as a library, an HTTP service, and/or a gRPC service.
Access to agents is implemented (client and server) using the
[AgentSession](https://github.com/cognizant-ai-lab/neuro-san/blob/main/neuro_san/interfaces/agent_session.py)
interface, which has two main methods:

* function()

    This tells the client what the top-level agent will do for it.

* streaming_chat()

    This is the main entry point. Send some text and it starts a conversation
    with a "front man" agent.  If that agent needs more information, it will ask
    you, and you return your answer via another call to the same interface.
    ChatMessage results from this method are streamed, and when the conversation
    is over, the stream itself closes after the last message has been received.

    ChatMessages of various types will come back over the stream.
    Anything of type AI is the front-man answering you on behalf of the rest of
    its agent posse, so this is the kind you want to pay the most attention to.
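Schematically, a client consuming the stream keeps the AI-typed messages and can ignore the agent-to-agent traffic. The sketch below uses stand-in dicts, not neuro-san's actual ChatMessage type, and the non-AI type names are made up for illustration:

```python
# Schematic only: stand-in dicts rather than real ChatMessage objects.
def collect_answers(stream):
    """Keep only the AI-typed messages from a chat stream."""
    return [msg["text"] for msg in stream if msg["type"] == "AI"]

# A pretend stream: internal agent chatter plus the front man's reply.
stream = [
    {"type": "AGENT", "text": "routing to downstream agent..."},
    {"type": "AGENT", "text": "greeting composed"},
    {"type": "AI", "text": "Hello, world."},
]

print(collect_answers(stream))  # ['Hello, world.']
```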

Implementations of the AgentSession interface:

* DirectAgentSession class. Use this if you want to call neuro-san as a library.
* GrpcServiceAgentSession class. Use this if you want to call neuro-san as a client to a gRPC service.
* HttpServiceAgentSession class. Use this if you want to call neuro-san as a client to an HTTP service.

Note that agent_cli uses all of these.  You can look at the source code there for examples.

There are also asynchronous implementations available via the
[AsyncAgentSession](https://github.com/cognizant-ai-lab/neuro-san/blob/main/neuro_san/interfaces/async_agent_session.py)
interface.

## Advanced concepts

### Coded Tools

Most of the examples provided here show how no-code agents are put together,
but neuro-san agent networks support the notion of coded tools for
low-code solutions.

These are most often used when an agent needs to call out to a specific
web service, but they can be any kind of Python code as long as it
derives from the CodedTool interface defined in neuro_san/interfaces/coded_tool.py.

The main interface for this class looks like this:

     async def async_invoke(self, args: Dict[str, Any], sly_data: Dict[str, Any]) -> Any:

Note that while a synchronous version of this method is available for tire-kicking convenience,
this asynchronous interface is the preferred entry point because neuro-san itself is designed
to operate in an asynchronous server environment to enhance agent parallelism.

The args are an argument dictionary passed in by the calling LLM, whose keys
are defined in the agent's hocon entry for the CodedTool.

The intent with sly_data is that the data in this dictionary is never supposed to enter the chat stream.
Most often this is private data, but sly_data can also be used as a bulletin board
where CodedTools cooperate on their results.

Sly data has many potential origins:

* sent explicitly by a client (usernames, tokens, session ids, etc.)
* generated by other CodedTools
* generated by other agent networks

See the class and method comments in neuro_san/interfaces/coded_tool.py for more information.

When you develop your own coded tools, there is another environment variable
that comes into play:

    export AGENT_TOOL_PATH=<your_repo>/coded_tools

Beneath this, classes are dynamically resolved based on their agent name.
That is, if you added a new coded tool to your agent, its file path would
look like this:

    <your_repo>/coded_tools/<your_agent_name>/<your_coded_tool>.py
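A minimal sketch of a coded tool follows. The stand-in base class substitutes for the real CodedTool interface in neuro_san/interfaces/coded_tool.py so the example is self-contained, and the tool name, argument name, and lookup logic are all hypothetical:

```python
import asyncio
from typing import Any, Dict


class CodedTool:
    """Stand-in for neuro_san.interfaces.coded_tool.CodedTool."""

    async def async_invoke(self, args: Dict[str, Any],
                           sly_data: Dict[str, Any]) -> Any:
        raise NotImplementedError


class AccountLookup(CodedTool):
    """Hypothetical tool: resolves an account id without the login
    ever entering the LLM chat stream."""

    async def async_invoke(self, args: Dict[str, Any],
                           sly_data: Dict[str, Any]) -> Any:
        # "account_name" would be declared in the agent's hocon entry
        # for this tool; the calling LLM fills it in at call time.
        name = args["account_name"]

        # sly_data carries private data (here, a login) and doubles as
        # a bulletin board: results left here are visible to later
        # CodedTools but never to the LLM.
        account_id = f"{sly_data['login']}:{name}"
        sly_data["account_id"] = account_id
        return f"Resolved account {name}"


sly_data = {"login": "alice"}
result = asyncio.run(AccountLookup().async_invoke(
    {"account_name": "savings"}, sly_data))
print(result)                   # Resolved account savings
print(sly_data["account_id"])   # alice:savings
```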

## Creating Clients

To create clients, follow the [instructions](docs/clients.md) here.

            
