aiaas-falcon

Name: aiaas-falcon
Version: 0.2.2
Summary: This Python package helps interact with Generative AI - Large Language Models. It talks to the AIaaS LLM, AIaaS embedding, and AIaaS Audio sets of APIs to serve requests.
Upload time: 2024-01-04 09:39:33
Author: Your Name
Requires Python: >=3.8.1,<4.0.0
License: MIT
![AIaaS Falcon Logo](img/AIAAS_FALCON.jpg)

# AIaaS Falcon


<h4 align="center">
    <p>
        <a href="#shield-installation">Installation</a> |
        <a href="#fire-quickstart">Quickstart</a>
    </p>
</h4>


![Documentation Coverage](interrogate_badge.svg)

## Description

AIaaS_Falcon is a Generative AI - LLM library that interacts with different model API endpoints, allowing operations such as listing models, creating embeddings, and generating text based on certain configurations. AIaaS_Falcon helps you invoke a RAG pipeline in seconds.

## Supported Endpoint Types:
- Azure OpenAI
- SingtelGPT
- Dev_Quantized
- Dev_Full

## :shield: Installation

Install the package with pip. Ensure the `requests` and `google-api-core` libraries are also installed:

```bash
pip install aiaas-falcon
```


To install from source:

```bash
git clone https://github.com/Praveengovianalytics/AIaaS_falcon && cd AIaaS_falcon
pip install -e .
```

## Methods

### `Falcon` Class
- `__init__(config)`
Initialise the Falcon object with endpoint configs. \
Parameters:
    - api_key: API key
    - api_name: Name for the endpoint
    - api_endpoint: Type of endpoint (can be azure, dev_quan, dev_full, prod)
    - host_name_port: Host and port information
    - use_pil: Activate Personal Identifier Information Limit Protection (Boolean)
    - protocol: HTTP/HTTPS
    - api_type: Subroute, if needed
    - use_pii: Whether the current endpoint needs Personal Identifier Information Limit Protection
    - log_key: Auth key to use the application
- `current_active()`
Check which endpoint is currently active
- `add_endpoint(api_name,protocol,host_name_port,api_endpoint,api_key,use_pil=False)`
Add a new endpoint. \
Parameters:
    - api_key: API key
    - api_name: Name for the endpoint
    - api_endpoint: Type of endpoint (can be azure, dev_quan, dev_full, prod)
    - host_name_port: Host and port information
    - use_pii: Activate Personal Identifier Information Limit Protection (Boolean)
    - protocol: HTTP/HTTPS
    - use_pil: Whether the current endpoint needs Personal Identifier Information Limit Protection
- `list_endpoint()`
List all endpoints in the endpoint manager
- `set_endpoint(name)`
Set the target endpoint as active \
Parameter:
    - name: Target endpoint's name
 
- `remove_endpoint(name)`
Delete an endpoint by name \
Parameter:
    - name: Target endpoint's name

- `current_pii()`
Check the current Personal Identifier Information Protection activation status

- `switch_pii()`
Toggle the current Personal Identifier Information Protection activation status
- `list_models()`
List the models available
- `initalise_pii()`
Download and initialise PII Protection. \
Note: This does not activate PII; it only initialises the dependencies

- `health()`
Check the health of the current endpoint

- `create_embedding(file_path)`
Create embeddings by sending files to the API. \
Parameter:
    - file_path: Path to the file

- `generate_text_full(query="",
            context="",
            use_file=0,
            model="",
            chat_history=[],
            max_new_tokens: int = 200,
            temperature: float = 0,
            top_k: int = -1,
            frequency_penalty: int = 0,
            repetition_penalty: int = 1,
            presence_penalty: float = 0,
            fetch_k=100000,
            select_k=4,
            api_version='2023-05-15',
            guardrail={'jailbreak': False, 'moderation': False},
            custom_guardrail=None)` \
  Generate text using the LLM endpoint. Note: some parameters are endpoint-specific. \
  Parameters:
  - query: a string containing your prompt
  - use_file: whether to include the embedded file as context in generation. Only applies to dev_full and dev_quan. `create_embedding` must be called before use.
  - model: a string naming the model to use. Use `list_models` to check which models are available.
  - chat_history: an array of chat history between user and bot. Only applies to dev_full and dev_quan. (Beta)
  - max_new_tokens: maximum number of new tokens to generate. Must be an integer.
  - temperature: float that controls the randomness of the sampling. Lower
        values make the model more deterministic, while higher values make
        the model more random. Zero means greedy sampling.
  - top_k: integer that controls the number of top tokens to consider.
  - frequency_penalty: float that penalizes new tokens based on their
        frequency in the generated text so far.
  - repetition_penalty: float that penalizes new tokens based on whether
        they appear in the prompt and the generated text so far.
  - presence_penalty: float that penalizes new tokens based on whether they
        appear in the generated text so far.
  - fetch_k: used for document retrieval; how many elements to include in the search. Only applies when `use_file` is 1.
  - select_k: number of documents to select in document retrieval. Only applies when `use_file` is 1.
  - api_version: only applies to the azure endpoint.
  - guardrail: whether to use the default jailbreak and moderation guardrails.
  - custom_guardrail: path to a custom guardrail .yaml file. The format can be found in sample.yaml.
  
- `evaluate_parameter(config)`
Carry out a grid search over parameters \
Parameter:
    - config: A dict. The dict must contain model and query. Each parameter to grid-search over must be a list.
        - model: a string naming the model
        - query: a string of the query
        - other parameters (e.g. "temperature": list(np.arange(0, 2, 0.5)))
- `decrypt_hash(encrypted_data)`
Decrypt the configuration from an experiment id. \
Parameter:
    - encrypted_data: a string of the id
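The sampling parameters documented for `generate_text_full` (temperature, top_k, with zero temperature meaning greedy sampling and top_k of -1 meaning no filtering) follow the usual LLM conventions. As a rough dependency-free illustration of those semantics, not the library's or server's actual implementation, temperature-scaled, top-k-filtered sampling over a logit vector can be sketched as:

```python
import math
import random

def sample_token(logits, temperature=0.0, top_k=-1):
    """Toy sketch of temperature + top_k sampling over a list of logits.

    temperature == 0 means greedy sampling (pick the argmax);
    top_k == -1 means consider all tokens.
    """
    if temperature == 0:
        # Greedy: zero temperature deterministically picks the best token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Keep only the top_k highest-logit tokens (all of them when top_k == -1).
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    kept = order if top_k == -1 else order[:top_k]
    # Softmax over the kept logits, scaled by temperature, then sample.
    scaled = [logits[i] / temperature for i in kept]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(kept, weights=weights, k=1)[0]

print(sample_token([0.1, 2.5, 0.3], temperature=0))  # greedy -> 1
```

Higher temperature flattens the weights (more random choices); a small top_k restricts sampling to the likeliest tokens.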


## :fire: Quickstart

```python
from aiaas_falcon import Falcon

model = Falcon(
    api_name="azure_1",
    protocol="https",
    host_name_port="example.com",
    api_key="API_KEY",
    api_endpoint="azure",
    log_key="KEY",
)
model.list_models()
model.generate_text_full(
    query="Hello, introduce yourself",
    model="gpt-35-turbo-0613-vanilla",
    api_version="2023-05-15",
)
```
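The endpoint-management methods (`add_endpoint`, `set_endpoint`, `list_endpoint`, `remove_endpoint`, `current_active`) follow a plain registry pattern. The following is a dependency-free toy sketch of that lifecycle (illustrative only, not the Falcon class itself):

```python
class EndpointRegistry:
    """Toy registry mirroring Falcon's documented endpoint lifecycle."""

    def __init__(self):
        self._endpoints = {}  # name -> endpoint config dict
        self._active = None   # name of the currently active endpoint

    def add_endpoint(self, api_name, protocol, host_name_port, api_endpoint, api_key):
        # Register a new endpoint under its name.
        self._endpoints[api_name] = {
            "protocol": protocol,
            "host_name_port": host_name_port,
            "api_endpoint": api_endpoint,
            "api_key": api_key,
        }

    def set_endpoint(self, name):
        # Mark a registered endpoint as the active one.
        if name not in self._endpoints:
            raise KeyError(f"unknown endpoint: {name}")
        self._active = name

    def list_endpoint(self):
        return sorted(self._endpoints)

    def remove_endpoint(self, name):
        self._endpoints.pop(name, None)
        if self._active == name:
            self._active = None

    def current_active(self):
        return self._active

registry = EndpointRegistry()
registry.add_endpoint("azure_1", "https", "example.com", "azure", "API_KEY")
registry.set_endpoint("azure_1")
print(registry.current_active())  # azure_1
```

In the real library the registry holds live API configurations, so calls such as `list_models` and `generate_text_full` are routed to whichever endpoint is active.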


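The `evaluate_parameter` config mixes fixed values (`model`, `query`) with list-valued parameters to sweep. Assuming the usual grid-search semantics (an assumption, since the internals are not documented here; `expand_grid` is a hypothetical helper), the expansion of such a config looks like:

```python
from itertools import product

def expand_grid(config):
    """Hypothetical helper: expand list-valued config entries into
    every combination, keeping scalar entries fixed."""
    fixed = {k: v for k, v in config.items() if not isinstance(v, list)}
    grid = {k: v for k, v in config.items() if isinstance(v, list)}
    keys = sorted(grid)
    combos = []
    for values in product(*(grid[k] for k in keys)):
        combo = dict(fixed)          # model, query, etc. stay constant
        combo.update(zip(keys, values))  # one value per swept parameter
        combos.append(combo)
    return combos

config = {
    "model": "gpt-35-turbo-0613-vanilla",
    "query": "Hello",
    "temperature": [0.0, 0.5, 1.0],
    "top_k": [-1, 50],
}
print(len(expand_grid(config)))  # 3 * 2 = 6 combinations
```

Each combination would then be sent through the generation endpoint, which is why every swept parameter must be supplied as a list.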

## Conclusion

The AIaaS_Falcon library simplifies interactions with LLM APIs, providing a straightforward way to perform operations such as listing models, creating embeddings, and generating text.

## Authors

- [@Praveengovianalytics](https://github.com/Praveengovianalytics)
- [@zhuofan](https://github.com/zhuofan-16)

## Google Colab

- [Get started with aiaas_falcon](https://colab.research.google.com/drive/1haZ-1fD4htQuNF2zzyrUSTP90KRls1dC?usp=sharing)

## Badges

[![MIT License](https://img.shields.io/badge/License-MIT-green.svg)](https://choosealicense.com/licenses/mit/)

            
