vectara-agentic


Name: vectara-agentic
Version: 0.1.19
Home page: https://github.com/vectara/py-vectara-agentic
Summary: A Python package for creating AI Assistants and AI Agents with Vectara
Upload time: 2024-11-08 00:47:43
Maintainer: None
Docs URL: None
Author: Ofer Mendelevitch
Requires Python: >=3.10
License: None
Keywords: llm, nlp, rag, agentic-rag
# <img src="https://raw.githubusercontent.com/vectara/py-vectara-agentic/main/.github/assets/Vectara-logo.png" alt="Vectara Logo" width="30" height="30" style="vertical-align: middle;"> vectara-agentic

<p align="center">
  <a href="https://vectara.github.io/vectara-agentic-docs">Documentation</a> ·
  <a href="#examples">Examples</a> ·
  <a href="https://discord.gg/S9dwgCNEFs">Discord</a>
</p>

<p align="center">
  <a href="https://opensource.org/licenses/Apache-2.0">
    <img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License">
  </a>
  <a href="https://github.com/vectara/py-vectara-agentic/graphs/commit-activity">
    <img src="https://img.shields.io/badge/Maintained%3F-yes-green.svg" alt="Maintained">
  </a>
  <a href="https://twitter.com/vectara">
    <img src="https://img.shields.io/twitter/follow/vectara.svg?style=social&label=Follow%20%40Vectara" alt="Twitter">
  </a>
</p>

## ✨ Overview

`vectara-agentic` is a Python library for developing powerful AI assistants and agents using Vectara and Agentic-RAG. It leverages the LlamaIndex Agent framework, customized for use with Vectara.

<p align="center">
<img src="https://raw.githubusercontent.com/vectara/py-vectara-agentic/main/.github/assets/diagram1.png" alt="Agentic RAG diagram" width="100%" style="vertical-align: middle;">
</p>

###  Features

- Enables easy creation of custom AI assistants and agents.
- Create a Vectara RAG tool with a single line of code.
- Supports `ReAct`, `OpenAIAgent` and `LLMCompiler` agent types.
- Includes pre-built tools for various domains (e.g., finance, legal).
- Integrates with various LLM inference services, such as OpenAI, Anthropic, Gemini, Groq, Together.AI, Cohere, and Fireworks.
- Built-in support for observability with Arize Phoenix.

### 📚 Example AI Assistants

Check out our example AI assistants:

- [Financial Assistant](https://huggingface.co/spaces/vectara/finance-chat)
- [Justice Harvard Teaching Assistant](https://huggingface.co/spaces/vectara/Justice-Harvard)
- [Legal Assistant](https://huggingface.co/spaces/vectara/legal-agent)


###  Prerequisites

- [Vectara account](https://console.vectara.com/signup/?utm_source=github&utm_medium=code&utm_term=DevRel&utm_content=vectara-agentic&utm_campaign=github-code-DevRel-vectara-agentic)
- A Vectara corpus with an [API key](https://docs.vectara.com/docs/api-keys)
- [Python 3.10 or higher](https://www.python.org/downloads/)
- OpenAI API key (or an API key for Anthropic, Together.AI, Fireworks AI, Cohere, Gemini, or Groq, if you choose to use one of those providers)

###  Installation

```bash
pip install vectara-agentic
```

## 🚀 Quick Start

### 1. Create a Vectara RAG tool

```python
import os
from vectara_agentic import VectaraToolFactory
from pydantic import BaseModel, Field

vec_factory = VectaraToolFactory(
    vectara_api_key=os.environ['VECTARA_API_KEY'],
    vectara_customer_id=os.environ['VECTARA_CUSTOMER_ID'],
    vectara_corpus_id=os.environ['VECTARA_CORPUS_ID']
)

class QueryFinancialReportsArgs(BaseModel):
    query: str = Field(..., description="The user query.")
    year: int = Field(..., description="The year. An integer between {min(years)} and {max(years)}.")
    ticker: str = Field(..., description="The company ticker. Must be a valid ticker symbol from the list {tickers.keys()}.")

query_financial_reports_tool = vec_factory.create_rag_tool(
    tool_name="query_financial_reports",
    tool_description="Query financial reports for a company and year",
    tool_args_schema=QueryFinancialReportsArgs,
)
```

### 2. Create other tools (optional)

In addition to RAG tools, you can create many other types of tools for the agent to use: mathematical tools, tools
that call other APIs to retrieve additional information, or any other kind of tool.

See [Agent Tools](#agent-tools) for more information.

### 3. Create your agent

```python
from vectara_agentic import Agent

agent = Agent(
    tools=[query_financial_reports_tool],
    topic="10-K financial reports",
    custom_instructions="""
        - You are a helpful financial assistant in conversation with a user. Use your financial expertise when crafting a query to the tool, to ensure you get the most accurate information.
        - You can answer questions, provide insights, or summarize any information from financial reports.
        - A user may refer to a company's ticker instead of its full name - consider those the same when a user is asking about a company.
        - When calculating a financial metric, make sure you have all the information from tools to complete the calculation.
        - In many cases you may need to query tools on each sub-metric separately before computing the final metric.
        - When using a tool to obtain financial data, consider the fact that information for a certain year may be reported in the following year's report.
        - Report financial data in a consistent manner. For example if you report revenue in thousands, always report revenue in thousands.
    """
)
```

### 4. Run your agent

```python
response = agent.chat("What was the revenue for Apple in 2021?")
print(response)
```

## 🛠️ Agent Tools

`vectara-agentic` provides a few tools out of the box:
1. **Standard tools**: 
- `summarize_text`: a tool to summarize a long text into a shorter summary (uses LLM)
- `rephrase_text`: a tool to rephrase a given text, given a set of rephrase instructions (uses LLM)
  
2. **Legal tools**: a set of tools for the legal vertical, such as:
- `summarize_legal_text`: summarize legal text with a certain point of view
- `critique_as_judge`: critique a legal text from a judge's perspective

3. **Financial tools**: based on tools from Yahoo! Finance:
- tools to understand the financials of a public company, such as `balance_sheet`, `income_statement`, and `cash_flow`
- `stock_news`: provides news about a company
- `stock_analyst_recommendations`: provides stock analyst recommendations for a company.

4. **Database tools**: tools to inspect and query a database
- `list_tables`: list all tables in the database
- `describe_tables`: describe the schema of tables in the database
- `load_data`: returns data based on a SQL query
- `load_sample_data`: returns the first 25 rows of a table
- `load_unique_values`: returns the top unique values for a given column

In addition, we include various other tools from LlamaIndex ToolSpecs:
* Tavily search
* arxiv
* neo4j
* Google tools (including gmail, calendar, and search)
* Slack

Note that some of these tools may require API keys to be set as environment variables.

You can create your own tool directly from a Python function using the `create_tool()` method of the `ToolsFactory` class:

```python
from vectara_agentic.tools import ToolsFactory  # assumed import path for ToolsFactory

def mult_func(x: float, y: float) -> float:
    """Multiply two numbers."""
    return x * y

mult_tool = ToolsFactory().create_tool(mult_func)
```
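
Custom tools are used like any other tool. As a minimal sketch (reusing `query_financial_reports_tool` from the Quick Start; argument values are placeholders), the custom tool can be passed to an agent alongside the RAG tool:

```python
from vectara_agentic import Agent

# Hypothetical example: combine the Vectara RAG tool from the Quick Start with the custom tool above.
agent = Agent(
    tools=[query_financial_reports_tool, mult_tool],
    topic="10-K financial reports",
    custom_instructions="You are a helpful financial assistant.",
)

print(agent.chat("What is 17 times 3?"))  # the agent can now call mult_tool for the calculation
```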

## 🛠️ Configuration

Configure `vectara-agentic` using environment variables:

- `VECTARA_AGENTIC_AGENT_TYPE`: valid values are `REACT`, `LLMCOMPILER` or `OPENAI` (default: `OPENAI`)
- `VECTARA_AGENTIC_MAIN_LLM_PROVIDER`: valid values are `OPENAI`, `ANTHROPIC`, `TOGETHER`, `GROQ`, `COHERE`, `GEMINI` or `FIREWORKS` (default: `OPENAI`)
- `VECTARA_AGENTIC_MAIN_MODEL_NAME`: agent model name (default depends on provider)
- `VECTARA_AGENTIC_TOOL_LLM_PROVIDER`: tool LLM provider (default: `OPENAI`)
- `VECTARA_AGENTIC_TOOL_MODEL_NAME`: tool model name (default depends on provider)
- `VECTARA_AGENTIC_OBSERVER_TYPE`: valid values are `ARIZE_PHOENIX` or `NONE` (default: `NONE`)

When creating a `VectaraToolFactory`, you can pass in a `vectara_api_key`, `vectara_customer_id`, and `vectara_corpus_id` to the factory. If not passed in, these are taken from the corresponding environment variables (`VECTARA_API_KEY`, `VECTARA_CUSTOMER_ID`, and `VECTARA_CORPUS_ID`). Note that `VECTARA_CORPUS_ID` can be a single ID or a comma-separated list of IDs (if you want to query multiple corpora).
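
As a minimal sketch (values are placeholders), these variables can be set in Python before the factory and agent are created:

```python
import os

# Agent and observer configuration; must be set before the Agent is created.
os.environ["VECTARA_AGENTIC_AGENT_TYPE"] = "REACT"
os.environ["VECTARA_AGENTIC_MAIN_LLM_PROVIDER"] = "ANTHROPIC"
os.environ["VECTARA_AGENTIC_OBSERVER_TYPE"] = "ARIZE_PHOENIX"

# Vectara credentials, picked up by VectaraToolFactory when not passed explicitly.
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR-CUSTOMER-ID>"
os.environ["VECTARA_CORPUS_ID"] = "1,2"  # a single ID or a comma-separated list
os.environ["VECTARA_API_KEY"] = "<YOUR-VECTARA-API-KEY>"
```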

## ℹ️ Additional Information

### About Custom Instructions for your Agent

The custom instructions you provide to the agent guide its behavior.
Here are some guidelines when creating your instructions:
- Write precise and clear instructions, without overcomplicating.
- Consider edge cases and unusual or atypical scenarios.
- Be careful not to over-specify behavior based on your primary use case, as that may limit the agent's ability to behave properly in other scenarios.

###  Diagnostics

The `Agent` class provides a few methods to help you understand the internals of your application.
* The `report()` method prints out the agent object's type, the tools, and the LLMs used for the main agent and tool calling.
* The `token_counts()` method tells you how many tokens you have used in the current session for both the main agent and tool calling LLMs. This can be helpful if you want to track spend by token.
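
For example (a minimal sketch, reusing the `agent` from the Quick Start):

```python
# Print the agent type, the registered tools, and the LLMs used for the main agent and for tool calling.
agent.report()

# Inspect token usage for the current session (main agent and tool-calling LLMs).
print(agent.token_counts())
```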

###  Serialization

The `Agent` class supports serialization. Use `dumps()` to serialize an agent and `loads()` to reconstruct it from the serialized string.
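
As a minimal sketch (assuming `loads()` is available as a class method on `Agent`):

```python
# Serialize the agent to a string, e.g. to persist it between sessions.
serialized = agent.dumps()

# Later, reconstruct the agent from the serialized data.
restored_agent = Agent.loads(serialized)
print(restored_agent.chat("What was the revenue for Apple in 2021?"))
```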

###  Observability

vectara-agentic supports observability via the existing integration of LlamaIndex and Arize Phoenix.
First, set `os.environ["VECTARA_AGENTIC_OBSERVER_TYPE"] = "ARIZE_PHOENIX"`.
Then you can use Arize Phoenix in two ways:
1. **Locally**. 
   1. If you have a local Phoenix server running (e.g. started with `python -m phoenix.server.main serve`), vectara-agentic will send all traces to it.
   2. If not, vectara-agentic will run a local instance during the agent's lifecycle and close it when finished.
   3. In both cases, traces are sent to the local instance, and you can see the dashboard at `http://localhost:6006`.
2. **Hosted Instance**. In this case the traces are sent to a Phoenix instance hosted on Arize.
   1. Go to `https://app.phoenix.arize.com` and set up an account if you don't have one.
   2. Create an API key and put it in the `PHOENIX_API_KEY` environment variable; this variable indicates you want to use the hosted version.
   3. To view the traces go to `https://app.phoenix.arize.com`.

Now when you run your agent, all call traces are sent to Phoenix and recorded. 
In addition, vectara-agentic records `FCS` (factual consistency score, aka HHEM) values in Arize for every Vectara RAG call. You can see those results in the `Feedback` column of the Arize UI.
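
As a minimal sketch, pointing the agent at the hosted Phoenix instance might look like this (values are placeholders):

```python
import os

from vectara_agentic import Agent

# Enable tracing; must be set before the agent is created.
os.environ["VECTARA_AGENTIC_OBSERVER_TYPE"] = "ARIZE_PHOENIX"

# Setting PHOENIX_API_KEY switches from a local Phoenix instance to the hosted one on Arize.
os.environ["PHOENIX_API_KEY"] = "<YOUR-PHOENIX-API-KEY>"

agent = Agent(tools=[query_financial_reports_tool], topic="10-K financial reports")
agent.chat("What was the revenue for Apple in 2021?")  # traces and FCS values are sent to Phoenix
```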

## 🌐 API Endpoint

`vectara-agentic` can be easily hosted locally or on a remote machine behind an API endpoint, by following these steps:

### Step 1: Set up your API key
Ensure that you have your API key set up as an environment variable:

```bash
export VECTARA_AGENTIC_API_KEY=<YOUR-ENDPOINT-API-KEY>
```

If you don't specify an endpoint API key, the default value `dev-api-key` is used.

### Step 2: Start the API Server
Initialize the agent and start the FastAPI server by following this example:


```python
from vectara_agentic.agent import Agent
from vectara_agentic.agent_endpoint import start_app

agent = Agent(...)  # Initialize your agent with appropriate parameters
start_app(agent)
```

You can customize the host and port by passing them as arguments to `start_app()` (defaults: `host="0.0.0.0"` and `port=8000`). For example:
```python
start_app(agent, host="0.0.0.0", port=8000)
```

### Step 3: Access the API Endpoint
Once the server is running, you can interact with it using curl or any HTTP client. For example:

```bash
curl -G "http://<remote-server-ip>:8000/chat" \
  --data-urlencode "message=What is Vectara?" \
  -H "X-API-Key: <YOUR-ENDPOINT-API-KEY>"
```
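
The same request can be made from Python, as a sketch using the `requests` library (the `/chat` path and `X-API-Key` header follow the curl example above):

```python
import requests

response = requests.get(
    "http://<remote-server-ip>:8000/chat",
    params={"message": "What is Vectara?"},
    headers={"X-API-Key": "<YOUR-ENDPOINT-API-KEY>"},
)
print(response.text)
```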

## 🤝 Contributing

We welcome contributions! Please see our [contributing guide](https://github.com/vectara/py-vectara-agentic/blob/main/CONTRIBUTING.md) for more information.

## 📝 License

This project is licensed under the Apache 2.0 License. See the [LICENSE](https://github.com/vectara/py-vectara-agentic/blob/master/LICENSE) file for details.

## 📞 Contact

- Website: [vectara.com](https://vectara.com)
- Twitter: [@vectara](https://twitter.com/vectara)
- GitHub: [@vectara](https://github.com/vectara)
- LinkedIn: [@vectara](https://www.linkedin.com/company/vectara/)
- Discord: [Join our community](https://discord.gg/GFb8gMz6UH)

            
