aiaas-falcon-light

- **Name:** aiaas-falcon-light
- **Version:** 0.2.5
- **Summary:** This Python package helps you interact with Generative AI large language models. It calls the AIaaS LLM, AIaaS Embedding, and AIaaS Audio APIs to serve requests.
- **Uploaded:** 2024-01-17 07:43:19
- **Requires Python:** >=3.8.1,<4.0.0
- **License:** MIT
            ![AIaaS Falcon Logo](img/AIAAS_FALCON.jpg)

# AIaaS Falcon-Light


<h4 align="center">
    <p>
        <a href="#shield-installation">Installation</a> |
        <a href="#fire-quickstart">Quickstart</a>
    </p>
</h4>


![Documentation Coverage](interrogate_badge.svg)

## Description

AIaaS_Falcon_Light is a logic and logging framework that supports the AIaaS Falcon Generative AI library.

## :shield: Installation

Make sure the `requests` and `google-api-core` libraries are installed, then install the package from PyPI:

```bash
pip install aiaas-falcon-light
```


To install from source:

```bash
git clone https://github.com/Praveengovianalytics/falcon_light && cd falcon_light
pip install -e .
```

### Methods
### `Light` Class
- `__init__(config)` \
Initialise the Falcon object with endpoint configs. \
Parameter:
     - config: An object with the following fields:
        - api_key: API key
        - api_name: Name for the endpoint
        - api_endpoint: Type of endpoint (can be azure, dev_quan, dev_full, prod)
        - url: URL of the endpoint (e.g. http://localhost:8443/)
        - log_id: ID of the log (integer)
        - use_pii: Activate Personally Identifiable Information (PII) protection (boolean)
        - headers: Header JSON for the endpoint
        - log_key: Auth key to use the application
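
The configuration fields above can be gathered into a plain dictionary before constructing the client. This is only a sketch: the key names follow the list above, the values are placeholders, and the constructor call is commented out because it needs a live endpoint (whether `Light` accepts a dict or a dedicated config object should be checked against the library).

```python
# Hypothetical config for the Light client, using the fields documented above.
config = {
    "api_key": "YOUR_API_KEY",                       # API key (placeholder)
    "api_name": "azure_1",                           # name for the endpoint
    "api_endpoint": "azure",                         # azure, dev_quan, dev_full, or prod
    "url": "http://localhost:8443/",                 # endpoint URL
    "log_id": 1,                                     # integer log ID
    "use_pii": False,                                # PII protection off by default
    "headers": {"Content-Type": "application/json"}, # header JSON for the endpoint
    "log_key": "YOUR_LOG_KEY",                       # auth key (placeholder)
}

# Constructing the client would look roughly like this:
# from aiaas_falcon_light import Light   # import path is an assumption
# model = Light(config)
```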


- `current_pii()` \
Check the current Personally Identifiable Information (PII) protection activation status

- `switch_pii()` \
Toggle the current PII protection activation status
- `list_models()` \
List the available models
- `initalise_pii()` \
Download and initialise PII protection. \
Note: This does not activate PII protection; it only initialises its dependencies

- `health()` \
Check the health of the current endpoint

- `create_embedding(file_path)` \
Create embeddings by sending a file to the API. \
Parameter:
    - file_path: Path to the file

- `generate_text(query="",
            context="",
            use_file=0,
            model="",
            chat_history=[],
            max_new_tokens: int = 200,
            temperature: float = 0,
            top_k: int = -1,
            frequency_penalty: int = 0,
            repetition_penalty: int = 1,
            presence_penalty: float = 0,
            fetch_k=100000,
            select_k=4,
            api_version='2023-05-15',
            guardrail={'jailbreak': False, 'moderation': False},
            custom_guardrail=None)` \
  Generate text using the LLM endpoint. Note: Some parameters are endpoint-specific. \
  Parameter:
  - query: a string containing your prompt
  - use_file: Whether to use a file as context in generation. Only applies to dev_full and dev_quan. Requires `create_embedding` to be called first.
  - model: a string naming the model to use. You can use `list_models()` to check which models are available.
  - chat_history: an array of chat history between user and bot. Only applies to dev_full and dev_quan. (Beta)
  - max_new_tokens: maximum number of new tokens to generate. Must be an integer.
  - temperature: Float that controls the randomness of the sampling. Lower
        values make the model more deterministic, while higher values make
        the model more random. Zero means greedy sampling.
  - top_k: Integer that controls the number of top tokens to consider.
  - frequency_penalty: Float that penalizes new tokens based on their
        frequency in the generated text so far.
  - repetition_penalty: Float that penalizes new tokens based on whether
        they appear in the prompt and the generated text so far.
  - presence_penalty: Float that penalizes new tokens based on whether they
        appear in the generated text so far.
  - fetch_k: Used for document retrieval; how many elements to include in the search. Only applies when `use_file` is 1.
  - select_k: Number of documents to select for document retrieval. Only applies when `use_file` is 1.
  - api_version: Only applies to the azure endpoint.
  - guardrail: Whether to use the default jailbreak and moderation guardrails.
  - custom_guardrail: Path to a custom guardrail .yaml file. The format can be found in sample.yaml.
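
As a minimal sketch, the keyword arguments above can be collected into a dict before calling the method. The values here are illustrative placeholders, and the call itself is commented out because it requires a configured client and a live endpoint.

```python
# Illustrative keyword arguments for generate_text(), per the parameter list above.
kwargs = {
    "query": "Summarise the quarterly report in two sentences.",  # prompt (placeholder)
    "model": "gpt-35-turbo-0613-vanilla",    # pick one from list_models()
    "max_new_tokens": 200,
    "temperature": 0,                        # 0 = greedy (deterministic) sampling
    "top_k": -1,                             # -1 = consider all tokens
    "guardrail": {"jailbreak": True, "moderation": True},
    "api_version": "2023-05-15",             # azure endpoint only
}

# response = model.generate_text(**kwargs)   # requires a configured Light client
```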
  
- `evaluate_parameter(config)` \
Carry out a grid search over parameters. \
Parameter:
    - config: A dict that must contain model and query. Any parameter to grid-search over must be given as a list.
        - model: a string naming the model
        - query: a string containing the query
        - **other parameters (e.g. "temperature": list(np.arange(0, 2, 0.5)))
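
The grid search accepted by `evaluate_parameter` can be pictured as expanding every list-valued parameter into a Cartesian product of runs. The following is a small standalone sketch of that expansion, not the library's internal code; the parameter values are illustrative.

```python
from itertools import product

# Hypothetical grid-search config, per the evaluate_parameter() description above.
config = {
    "model": "gpt-35-turbo-0613-vanilla",
    "query": "Hello, introduce yourself",
    "temperature": [0, 0.5, 1.0, 1.5],   # list => grid-searched
    "top_k": [-1, 50],                   # list => grid-searched
}

# Split fixed settings from list-valued (grid-searched) ones.
fixed = {k: v for k, v in config.items() if not isinstance(v, list)}
grid = {k: v for k, v in config.items() if isinstance(v, list)}

# One run per combination in the Cartesian product of the grid values.
runs = [dict(fixed, **dict(zip(grid, combo))) for combo in product(*grid.values())]

print(len(runs))  # 4 temperatures x 2 top_k values = 8 runs
```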
- `decrypt_hash(encrypted_data)` \
Decrypt the configuration from an experiment id. \
Parameter:
    - encrypted_data: a string containing the id

## :fire: Quickstart

```python
from aiaas_falcon import Falcon
model=Falcon(api_name="azure_1",protocol='https',host_name_port='example.com',api_key='API_KEY',api_endpoint='azure',log_key="KEY")
model.list_models()
model.generate_text_full(query="Hello, introduce yourself",model='gpt-35-turbo-0613-vanilla',api_version='2023-05-15')
```

## Conclusion

The AIaaS_Falcon_Light library simplifies interactions with AIaaS Falcon, providing a straightforward way to perform operations such as fact-checking and logging.

## Authors

- [@Praveengovianalytics](https://github.com/Praveengovianalytics)
- [@zhuofan](https://github.com/zhuofan-16)

## Google Colab

- [Get started with aiaas_falcon](https://colab.research.google.com/drive/1haZ-1fD4htQuNF2zzyrUSTP90KRls1dC?usp=sharing)

## Badges

[![MIT License](https://img.shields.io/badge/License-MIT-green.svg)](https://choosealicense.com/licenses/mit/)

            
