# nimbusagent

- **Name**: nimbusagent
- **Version**: 0.6.0
- **Summary**: An OpenAI agent with basic memory, functions, and moderation support
- **Author**: Lee Huffman
- **Repository**: https://github.com/aerisweather/nimbusagent
- **Upload time**: 2024-04-20 10:59:38
- **Requires Python**: >=3.8
- **License**: MIT License (Copyright (c) 2023 Vaisala Xweather)
- **Keywords**: openai, gpt3, gpt3.5, gpt4, ai
## Overview

NimbusAgent is a Python module that provides an interface for interacting with OpenAI's GPT models, including GPT-4 and
GPT-3.5 Turbo. The module is designed to facilitate easy integration of advanced AI capabilities into applications,
offering features like automatic moderation, memory management, and support for both streaming and non-streaming
responses.

## Features

- Integration with OpenAI GPT models (GPT-4, GPT-3.5 Turbo)
- Automated content moderation
- Customizable AI responses with adjustable parameters
- Support for both streaming and non-streaming response handling
- Internal memory management for maintaining conversation context
- Extensible function handling for custom AI functionalities

## Installation

To install NimbusAgent, run the following command:

```bash
pip install nimbusagent
```

## Usage

### CompletionAgent

First, import the `CompletionAgent` class from the nimbusagent package, then create an instance of it,
passing in your OpenAI API key and the name of the model you want to use:

```python
from nimbusagent.agent.completion import CompletionAgent

agent = CompletionAgent(
    openai_api_key="YOUR_OPENAI_API_KEY",
    model_name="gpt-4-0613",
    max_tokens=2000
)

response = agent.ask("What's the weather like today?")
print(response)
```

### StreamingAgent

If you want to use a streaming agent, create an instance of the StreamingAgent class instead. When using a streaming
agent, you can use the ask() method to send a prompt to the agent, and then iterate over the response to get the chunks
of text as they are generated:

```python
from nimbusagent.agent.completion import StreamingAgent

agent = StreamingAgent(
    openai_api_key="YOUR_OPENAI_API_KEY",
    model_name="gpt-4-0613",
    max_tokens=2000
)
response = agent.ask("What's the weather like today?")
for chunk in response:
    print(chunk)
```

### Configuration Parameters

When initializing an instance of `BaseAgent`, `CompletionAgent`, or `StreamingAgent`, several configuration parameters
can be passed to customize the agent's behavior. Below is a detailed description of these parameters:

#### `openai_api_key`

- **Description**: The API key for accessing OpenAI services.
- **Type**: `str`
- **Default**: `None` (The system will look for an environment variable `OPENAI_API_KEY` if not provided)
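Because the key falls back to the `OPENAI_API_KEY` environment variable, you can keep it out of source code entirely. A minimal sketch (the placeholder key below is obviously not a real credential):

```python
import os

# If OPENAI_API_KEY is not already set, supply it here; the agent will pick
# it up from the environment when openai_api_key is not passed explicitly.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")

api_key = os.environ["OPENAI_API_KEY"]
# agent = CompletionAgent(model_name="gpt-4-0613")  # key read from the environment
```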

#### `model_name`

- **Description**: The name of the primary OpenAI GPT model to use.
- **Type**: `str`
- **Default**: `'gpt-4-0613'`

#### `secondary_model_name`

- **Description**: The name of the secondary OpenAI GPT model to use. A custom function may request the secondary
  model to provide additional context for the AI, such as rephrasing a common response so it is customized. For these
  simpler responses you may choose a cheaper model, such as `gpt-3.5-turbo`.
- **Type**: `str`
- **Default**: `'gpt-3.5-turbo'`

#### `temperature`

- **Description**: Controls the randomness of the AI's responses.
- **Type**: `float`
- **Default**: `0.1`

#### `max_tokens`

- **Description**: The maximum number of tokens to generate in each response.
- **Type**: `int`
- **Default**: `500`

#### `functions`

- **Description**: A list of custom functions that the agent can use.
- **Type**: `Optional[list]`
- **Default**: `None`
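The exact registration format for custom functions is not spelled out in this README. As a sketch, a plain Python function with a descriptive docstring (a common convention for agents that build function definitions via introspection — an assumption on our part) might look like:

```python
def get_current_weather(location: str) -> str:
    """Get the current weather for a given location.

    Args:
        location: The city or place to look up, e.g. "Minneapolis, MN".
    """
    # A real implementation would call a weather API; this stub is illustrative.
    return f"The weather in {location} is sunny and 72F."

# Hypothetical registration -- running this requires a valid API key:
# agent = CompletionAgent(openai_api_key="...", functions=[get_current_weather])
```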

#### `functions_embeddings`

- **Description**: Embeddings for the functions to help the AI understand them better.
- **Type**: `Optional[List[dict]]`
- **Default**: `None`

#### `functions_embeddings_model`

- **Description**: The name of the OpenAI model to use for generating embeddings for the functions.
- **Type**: `str`
- **Default**: `'text-embedding-ada-002'`

#### `functions_always_use`

- **Description**: Functions that should always be used by the agent.
- **Type**: `Optional[List[str]]`
- **Default**: `None`

#### `functions_pattern_groups`

- **Description**: Pattern groups for matching functions to user queries.
- **Type**: `Optional[List[dict]]`
- **Default**: `None`

#### `functions_k_closest`

- **Description**: The number of closest functions to consider when handling a query.
- **Type**: `int`
- **Default**: `3`

#### `functions_min_similarity`

- **Description**: The minimum similarity score for a function to be considered when handling a query.
- **Type**: `float`
- **Default**: `0.5`

#### `function_max_tokens`

- **Description**: The maximum number of tokens to allow for function definitions. This is useful for keeping function
  definitions from consuming a large number of tokens, which lowers costs and prevents AI errors. Set to `0` for
  unlimited token usage.
- **Type**: `int`
- **Default**: `2000`

#### `use_tool_calls`

- **Description**: Whether to use the newer OpenAI Tool Calls instead of the now-deprecated Function Calls.
- **Type**: `bool`
- **Default**: `True`

#### `system_message`

- **Description**: A system message that sets the context for the agent.
- **Type**: `str`
- **Default**: `"You are a helpful assistant."`

#### `message_history`

- **Description**: Pre-existing chat history for the agent to consider.
- **Type**: `Optional[List[dict]]`
- **Default**: `None`
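The history format is presumably the standard OpenAI chat format of role/content dicts — an assumption, since this README does not spell it out:

```python
# Assumed OpenAI-style chat format for prior conversation turns.
message_history = [
    {"role": "user", "content": "My name is Alex."},
    {"role": "assistant", "content": "Nice to meet you, Alex!"},
]

# Hypothetical usage -- running this requires a valid API key:
# agent = CompletionAgent(openai_api_key="...", message_history=message_history)
```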

#### `calling_function_start_callback` and `calling_function_stop_callback`

- **Description**: Callback functions triggered when a custom function starts or stops.
- **Type**: `Optional[callable]`
- **Default**: `None`

#### `perform_moderation`

- **Description**: Whether the agent should moderate the responses for inappropriate content.
- **Type**: `bool`
- **Default**: `True`

#### `moderation_fail_message`

- **Description**: The message returned when moderation fails.
- **Type**: `str`
- **Default**: `"I'm sorry, I can't help you with that as it is not appropriate."`

#### `memory_max_entries` and `memory_max_tokens`

- **Description**: Limits for the agent's memory in terms of entries and tokens.
- **Types**: `int`
- **Defaults**: `20` for `memory_max_entries`, `2000` for `memory_max_tokens`
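To illustrate how the two limits interact (the agent's actual trimming strategy is internal and not documented here), a naive sketch that evicts the oldest entries first, using a crude word count in place of a real tokenizer:

```python
def trim_memory(entries, max_entries=20, max_tokens=2000,
                count_tokens=lambda e: len(e["content"].split())):
    """Drop oldest entries until both the entry and (approximate) token limits hold.

    count_tokens is a word-count stand-in for a real tokenizer.
    """
    trimmed = list(entries)[-max_entries:]          # enforce the entry cap first
    while trimmed and sum(count_tokens(e) for e in trimmed) > max_tokens:
        trimmed.pop(0)                              # evict the oldest entry
    return trimmed
```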

#### `internal_thoughts_max_entries`

- **Description**: The maximum number of entries for the agent's internal thoughts.
- **Type**: `int`
- **Default**: `8`

#### `loops_max`

- **Description**: The maximum number of loops the agent will process for a single query.
- **Type**: `int`
- **Default**: `8`

#### `send_events`

- **Description**: Whether the agent should send events (useful for streaming responses).
- **Type**: `bool`
- **Default**: `False`

#### `max_event_size`

- **Description**: The maximum size of an event in bytes. This limits how much data a function response can stream in a
  single event.
- **Type**: `int`
- **Default**: `2000`

#### `on_complete`

- **Description**: Callback function triggered when the agent completes a response. The response is passed to the
  callback as a string argument.
- **Type**: `Optional[callable]`
- **Default**: `None`

### Example of Initialization

Here's an example of how you might initialize a `CompletionAgent` with some of these parameters:

```python
agent = CompletionAgent(
    openai_api_key="your_api_key_here",
    model_name="gpt-4-0613",
    temperature=0.7,
    max_tokens=500,
    perform_moderation=True,
    system_message="You are a helpful assistant."
)
```

You can customize these parameters based on the specific requirements of your application or the behaviors you expect
from the agent.

### Advanced Usage and Examples

- For more advanced use cases such as handling multi-turn conversations or integrating custom AI functionalities, refer
  to the 'examples' directory in our repository.

## Getting Support

- For support, please open an issue in our GitHub repository.

## License

- NimbusAgent is released under the MIT License. See the LICENSE file for more details.

## Todo

- [ ] Add support for Azure OpenAI API
- [ ] Add support for OpenAI Assistant API
- [x] Add support for the new OpenAI Tool Calls in place of the now-deprecated Function Calls
- [x] Add Function call examples

## Stay Updated

- Follow our GitHub repository to stay updated with new releases and changes.


            
