generative-ai-hub-sdk

- Name: generative-ai-hub-sdk
- Version: 1.2.0 (PyPI)
- Home page: https://www.sap.com/
- Summary: generative AI hub SDK
- Upload time: 2024-02-02 14:39:07
- Author: SAP SE
- Requires Python: >=3.9
- License: SAP DEVELOPER LICENSE AGREEMENT
- Keywords: SAP generative AI hub SDK, SAP AI Core API, SAP AI Core
# SAP generative AI hub SDK

With this SDK you can leverage the power of generative models, such as ChatGPT, available in SAP's generative AI hub.

<!-- List of available models: #Todo -->

## Installation
    pip install generative-ai-hub-sdk

## Configuration
The SDK reuses the configuration from ai-core-sdk:
- `AICORE_CLIENT_ID`: the client ID.
- `AICORE_CLIENT_SECRET`: the client secret.
- `AICORE_AUTH_URL`: the URL used to retrieve a token with the client ID and secret.
- `AICORE_BASE_URL`: the URL of the service (with suffix `/v2`).
- `AICORE_RESOURCE_GROUP`: the resource group to use.

We recommend setting these values as environment variables or via a config file. The default path for this file
is `~/.aicore/config.json`:

```json
{
  "AICORE_AUTH_URL": "https://* * * .authentication.sap.hana.ondemand.com",
  "AICORE_CLIENT_ID": "* * * ",
  "AICORE_CLIENT_SECRET": "* * * ",
  "AICORE_RESOURCE_GROUP": "* * * ",
  "AICORE_BASE_URL": "https://api.ai.* * *.cfapps.sap.hana.ondemand.com/v2"
}
```
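
Alternatively, the same values can be set as environment variables before the SDK is imported. A minimal sketch (the values below are placeholders, not real credentials):

```python
import os

# Placeholder values -- replace with your tenant's actual credentials.
os.environ["AICORE_AUTH_URL"] = "https://<tenant>.authentication.sap.hana.ondemand.com"
os.environ["AICORE_CLIENT_ID"] = "<client-id>"
os.environ["AICORE_CLIENT_SECRET"] = "<client-secret>"
os.environ["AICORE_RESOURCE_GROUP"] = "default"
os.environ["AICORE_BASE_URL"] = "https://api.ai.<region>.cfapps.sap.hana.ondemand.com/v2"
```

Set these before the first `gen_ai_hub` import so the SDK picks them up on initialization.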

## Usage

### Prerequisite
*Activate* the generative AI hub for your tenant according to the [Generative AI Hub document](https://help.sap.com/doc/7ca8e589556c4596abafe95545dfc212/CLOUD/en-US/553250b6ec764a05be43a7cd8cba0526.pdf).

### OpenAI-like API

#### Completion
Below is an example usage of `openai.Completions` in the generative-ai-hub-sdk:

    from gen_ai_hub.proxy.native.openai import completions

    response = completions.create(
      model_name="tiiuae--falcon-40b-instruct",
      prompt="The Answer to the Ultimate Question of Life, the Universe, and Everything is",
      max_tokens=7,
      temperature=0
    )
    print(response)
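
The response typically follows the OpenAI completions schema, so the generated text sits under `choices[0].text`. A minimal illustration with a mocked response object standing in for the real one returned by the SDK call above:

```python
from types import SimpleNamespace

# Mocked response mirroring the OpenAI completions schema (illustration only;
# the real object comes from completions.create above).
response = SimpleNamespace(
    choices=[SimpleNamespace(text=" 42", finish_reason="length")],
)

# Extract and clean up the generated text.
answer = response.choices[0].text.strip()
print(answer)
```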

#### ChatCompletion
Below is an example usage of openai.ChatCompletions:

    from gen_ai_hub.proxy.native.openai import chat

    messages = [ {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
                {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
                {"role": "user", "content": "Do other Azure Cognitive Services support this too?"} ]

    kwargs = dict(model_name='gpt-35-turbo', messages=messages)
    response = chat.completions.create(**kwargs)
    print(response)
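
For multi-turn conversations, the `messages` list simply grows with alternating user and assistant turns. A small helper for maintaining that structure (hypothetical convenience code, not part of the SDK):

```python
def make_history(system_prompt):
    """Start a chat history with a system message."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_text, assistant_text=None):
    """Append a user turn and, once available, the assistant's reply."""
    history.append({"role": "user", "content": user_text})
    if assistant_text is not None:
        history.append({"role": "assistant", "content": assistant_text})
    return history

history = make_history("You are a helpful assistant.")
add_turn(history, "Does Azure OpenAI support customer managed keys?",
         "Yes, customer managed keys are supported by Azure OpenAI.")
add_turn(history, "Do other Azure Cognitive Services support this too?")
# `history` now matches the `messages` structure shown above and can be
# passed to chat.completions.create(model_name=..., messages=history).
```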

#### Embeddings
Below is an example usage of openai.Embeddings:

    from gen_ai_hub.proxy.native.openai import embeddings

    response = embeddings.create(
        input="Every decoding is another encoding.",
        model_name="text-embedding-ada-002",
        encoding_format='base64'
    )
    print(response.data)
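
With `encoding_format='base64'`, each embedding typically arrives as a base64 string of little-endian float32 values rather than a JSON list of floats. A self-contained sketch of the decoding step, using a locally encoded vector in place of a real API payload:

```python
import base64
import struct

def decode_embedding(b64: str) -> list:
    """Decode a base64 string of little-endian float32 values into a list of floats."""
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Simulate an API payload by encoding a known vector
# (values chosen to be exactly representable as float32).
vector = [0.25, -0.5, 1.0]
payload = base64.b64encode(struct.pack("<3f", *vector)).decode()

print(decode_embedding(payload))
```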

### LangChain API

#### Model Initialization

The `init_llm` and `init_embedding_model` functions provide a harmonized way to initialize LangChain model interfaces through the generative AI hub SDK:

Function: `init_llm`

    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate

    from gen_ai_hub.proxy.langchain.init_models import init_llm

    template = """Question: {question}
        Answer: Let's think step by step."""
    prompt = PromptTemplate(template=template, input_variables=['question'])
    question = 'What is a supernova?'

    llm = init_llm('gpt-4', max_tokens=100)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    response = llm_chain.invoke(question)
    print(response['text'])

Function: `init_embedding_model`

    from gen_ai_hub.proxy.langchain.init_models import init_embedding_model

    text = 'Every decoding is another encoding.'
    embeddings = init_embedding_model('text-embedding-ada-002')
    response = embeddings.embed_query(text)
    print(response)
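
Embedding vectors are typically compared by cosine similarity, e.g. to rank documents against a query. A stdlib-only sketch, with toy vectors standing in for the real output of `embed_query`:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0, orthogonal directions 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```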

#### Completion Model

    from langchain.chains import LLMChain
    from langchain.prompts import PromptTemplate

    from gen_ai_hub.proxy.langchain.openai import OpenAI  # langchain class representing the AICore OpenAI models
    from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client

    proxy_client = get_proxy_client('aicore')
    # non-chat model
    model_name = "tiiuae--falcon-40b-instruct"

    llm = OpenAI(proxy_model_name=model_name, proxy_client=proxy_client)  # standard langchain usage

    template = """Question: {question}

    Answer: Let's think step by step."""

    prompt = PromptTemplate(template=template, input_variables=["question"])
    llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)

    question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

    print(llm_chain.predict(question=question))

#### Chat Model

    from langchain.chains import LLMChain
    from langchain.prompts.chat import (
        AIMessagePromptTemplate,
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        SystemMessagePromptTemplate,
    )

    from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
    from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client

    proxy_client = get_proxy_client('aicore')

    chat_llm = ChatOpenAI(proxy_model_name='gpt-35-turbo', proxy_client=proxy_client)
    template = 'You are a helpful assistant that translates English to pirate.'

    system_message_prompt = SystemMessagePromptTemplate.from_template(template)

    example_human = HumanMessagePromptTemplate.from_template('Hi')
    example_ai = AIMessagePromptTemplate.from_template('Ahoy!')
    human_template = '{text}'

    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
    chat_prompt = ChatPromptTemplate.from_messages(
        [system_message_prompt, example_human, example_ai, human_message_prompt])

    chain = LLMChain(llm=chat_llm, prompt=chat_prompt)

    response = chain.invoke('I love planking.')
    print(response['text'])

#### Embedding Model

    from gen_ai_hub.proxy.langchain.openai import OpenAIEmbeddings
    from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client

    proxy_client = get_proxy_client('aicore')

    # can be called without passing proxy_client
    embedding_model = OpenAIEmbeddings(proxy_model_name='text-embedding-ada-002')

    response = embedding_model.embed_query('Every decoding is another encoding.')
    print(response)


            
