llama-index-llms-cohere

Name: llama-index-llms-cohere
Version: 0.4.0
Summary: llama-index llms cohere integration
Author: Your Name
License: MIT
Requires Python: <4.0,>=3.9
Upload time: 2024-11-18 00:21:25
# LlamaIndex Llms Integration: Cohere

### Installation

```bash
pip install llama-index-llms-openai
pip install llama-index-llms-cohere
pip install llama-index
```
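
If you prefer not to hard-code the key, one option is to read it from an environment variable and pass it to the constructor explicitly. A minimal sketch, assuming you have exported a variable named `COHERE_API_KEY` (the name is an illustrative choice, not something the package requires):

```py
import os

from llama_index.llms.cohere import Cohere

# Assumes COHERE_API_KEY has been exported in your shell; the variable
# name here is illustrative, not required by the package.
api_key = os.environ["COHERE_API_KEY"]
llm = Cohere(api_key=api_key)
```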

### Basic usage

```py
# Import Cohere
from llama_index.llms.cohere import Cohere

# Set your API key
api_key = "YOUR_API_KEY"

# Call the complete endpoint
resp = Cohere(api_key=api_key).complete("Paul Graham is ")
# Note: the API may warn that trailing whitespace in the prompt has been trimmed.
print(resp)

# Output
# an English computer scientist, entrepreneur and investor.
# He is best known for his work as a co-founder of the seed accelerator Y Combinator.
# He is also the author of the free startup advice blog "Startups.com".
# Paul Graham is known for his philanthropic efforts.
# Has given away hundreds of millions of dollars to good causes.

# Call chat with a list of messages
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="user", content="hello there"),
    ChatMessage(
        role="assistant", content="Arrrr, matey! How can I help ye today?"
    ),
    ChatMessage(role="user", content="What is your name"),
]

resp = Cohere(api_key=api_key).chat(
    messages, preamble_override="You are a pirate with a colorful personality"
)
print(resp)

# Output
# assistant: Traditionally, ye refers to gender-nonconforming people of any gender,
# and those who are genderless, whereas matey refers to a friend, commonly used to
# address a fellow pirate. According to pop culture in works like "Pirates of the
# Caribbean", the romantic interest of Jack Sparrow refers to themselves using the
# gender-neutral pronoun "ye".

# Are you interested in learning more about the pirate culture?
```
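
Cohere can also be registered as the default LLM for downstream LlamaIndex components through the global `Settings` object. A minimal sketch, assuming a recent `llama-index-core` where `Settings` is available:

```py
from llama_index.core import Settings
from llama_index.llms.cohere import Cohere

api_key = "YOUR_API_KEY"

# Components that fall back to Settings.llm (query engines, chat engines, ...)
# will now use this Cohere instance by default.
Settings.llm = Cohere(api_key=api_key)
```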

### Streaming: Using stream_complete endpoint

```py
from llama_index.core.llms import ChatMessage
from llama_index.llms.cohere import Cohere

llm = Cohere(api_key=api_key)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Output
# an English computer scientist, essayist, and venture capitalist.
# He is best known for his work as a co-founder of the Y Combinator startup incubator,
# and his essays, which are widely read and influential in the startup community.

# Using stream_chat endpoint
messages = [
    ChatMessage(role="user", content="hello there"),
    ChatMessage(
        role="assistant", content="Arrrr, matey! How can I help ye today?"
    ),
    ChatMessage(role="user", content="What is your name"),
]

resp = llm.stream_chat(
    messages, preamble_override="You are a pirate with a colorful personality"
)
for r in resp:
    print(r.delta, end="")

# Output
# Arrrr, matey! According to etiquette, we are suppose to exchange names first!
# Mine remains a mystery for now.
```
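
If you also want the complete text once streaming finishes, you can accumulate the deltas as they arrive; a small sketch reusing the `llm` instance from above:

```py
# Collect streamed deltas into the final completion string.
chunks = []
for r in llm.stream_complete("Paul Graham is "):
    chunks.append(r.delta)
full_text = "".join(chunks)
print(full_text)
```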

### Configure Model

```py
llm = Cohere(model="command", api_key=api_key)
resp = llm.complete("Paul Graham is ")
# Note: the API may warn that trailing whitespace in the prompt has been trimmed.
print(resp)

# Output
# an English computer scientist, entrepreneur and investor.
# He is best known for his work as a co-founder of the seed accelerator Y Combinator.
# He is also the co-founder of the online dating platform Match.com.

# Async calls
llm = Cohere(model="command", api_key=api_key)
resp = await llm.acomplete("Paul Graham is ")
# Note: the API may warn that trailing whitespace in the prompt has been trimmed.
print(resp)

# Output
# an English computer scientist, entrepreneur and investor.
# He is best known for his work as a co-founder of the startup incubator and seed fund
# Y Combinator, and the programming language Lisp. He has also written numerous essays,
# many of which have become highly influential in the software engineering field.

# Streaming async
resp = await llm.astream_complete("Paul Graham is ")
async for delta in resp:
    print(delta.delta, end="")

# Output
# an English computer scientist, essayist, and businessman.
# He is best known for his work as a co-founder of the startup accelerator Y Combinator,
# and his essay "Beating the Averages."
```
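
The `await` calls above assume you are already inside an async event loop (for example, in a Jupyter notebook). In a plain Python script you would drive them with `asyncio`; a minimal sketch:

```py
import asyncio

from llama_index.llms.cohere import Cohere

api_key = "YOUR_API_KEY"


async def main():
    # Same async completion call as above, run from a regular script.
    llm = Cohere(model="command", api_key=api_key)
    resp = await llm.acomplete("Paul Graham is ")
    print(resp)


asyncio.run(main())
```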

### Set API Key at a per-instance level

```py
# If desired, you can have separate LLM instances use separate API keys.
from llama_index.llms.cohere import Cohere

llm_good = Cohere(api_key=api_key)
llm_bad = Cohere(model="command", api_key="BAD_KEY")

resp = llm_good.complete("Paul Graham is ")
print(resp)

resp = llm_bad.complete("Paul Graham is ")
print(resp)
```
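
Note that the `llm_bad` call above will raise an error because the key is invalid. If you want the comparison to fail gracefully, one option is to wrap it in a try/except; the concrete exception type comes from the underlying Cohere client, so a broad catch is used in this sketch:

```py
try:
    resp = llm_bad.complete("Paul Graham is ")
    print(resp)
except Exception as exc:  # exact exception type depends on the Cohere client
    print(f"Request with the bad key failed: {exc}")
```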

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/cohere/

            
