llama-index-llms-cohere


Name: llama-index-llms-cohere
Version: 0.3.1
Summary: llama-index llms cohere integration
Upload time: 2024-10-08 22:25:50
Author: Your Name
Requires Python: <4.0,>=3.8.1
License: MIT
# LlamaIndex Llms Integration: Cohere

### Installation

```bash
%pip install llama-index-llms-openai
%pip install llama-index-llms-cohere
!pip install llama-index
```
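The examples below pass the API key explicitly. If you prefer not to hard-code it, you can export it as an environment variable and read it at runtime; this sketch assumes the integration falls back to the `COHERE_API_KEY` environment variable when no `api_key` argument is given.

```py
import os

from llama_index.llms.cohere import Cohere

# Assumes the key was exported beforehand, e.g. `export COHERE_API_KEY="..."`.
api_key = os.environ.get("COHERE_API_KEY")

# Passing the key explicitly always works; relying solely on the environment
# variable is an assumption about the integration's fallback behavior.
llm = Cohere(api_key=api_key)
```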

### Basic usage

```py
# Import Cohere
from llama_index.llms.cohere import Cohere

# Set your API key
api_key = "Your api key"

# Call complete function
resp = Cohere(api_key=api_key).complete("Paul Graham is ")
# Note: Cohere may warn that trailing whitespace in the prompt has been trimmed to ensure high-quality generations.
print(resp)

# Output
# an English computer scientist, entrepreneur and investor.
# He is best known for his work as a co-founder of the seed accelerator Y Combinator.
# He is also the author of the free startup advice blog "Startups.com".
# Paul Graham is known for his philanthropic efforts.
# Has given away hundreds of millions of dollars to good causes.

# Call chat with a list of messages
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="user", content="hello there"),
    ChatMessage(
        role="assistant", content="Arrrr, matey! How can I help ye today?"
    ),
    ChatMessage(role="user", content="What is your name"),
]

resp = Cohere(api_key=api_key).chat(
    messages, preamble_override="You are a pirate with a colorful personality"
)
print(resp)

# Output
# assistant: Traditionally, ye refers to gender-nonconforming people of any gender,
# and those who are genderless, whereas matey refers to a friend, commonly used to
# address a fellow pirate. According to pop culture in works like "Pirates of the
# Caribbean", the romantic interest of Jack Sparrow refers to themselves using the
# gender-neutral pronoun "ye".

# Are you interested in learning more about the pirate culture?
```
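Rather than instantiating the client at every call site, you can also make Cohere the default LLM for the rest of your LlamaIndex application. A minimal sketch, assuming a recent `llama_index.core` that exposes the `Settings` singleton:

```py
from llama_index.core import Settings
from llama_index.llms.cohere import Cohere

# Any component created afterwards (query engines, response synthesizers, ...)
# that is not given an explicit llm argument will use this Cohere instance.
Settings.llm = Cohere(api_key=api_key)
```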

### Streaming: Using stream_complete endpoint

```py
from llama_index.llms.cohere import Cohere

llm = Cohere(api_key=api_key)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Output
# an English computer scientist, essayist, and venture capitalist.
# He is best known for his work as a co-founder of the Y Combinator startup incubator,
# and his essays, which are widely read and influential in the startup community.

# Using stream_chat endpoint
messages = [
    ChatMessage(role="user", content="hello there"),
    ChatMessage(
        role="assistant", content="Arrrr, matey! How can I help ye today?"
    ),
    ChatMessage(role="user", content="What is your name"),
]

resp = llm.stream_chat(
    messages, preamble_override="You are a pirate with a colorful personality"
)
for r in resp:
    print(r.delta, end="")

# Output
# Arrrr, matey! According to etiquette, we are suppose to exchange names first!
# Mine remains a mystery for now.
```
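If you also need the complete text once streaming finishes (for logging or post-processing), you can accumulate the deltas as they arrive; this is plain Python and makes no extra assumptions about the client:

```py
# Collect streamed deltas into the full response text.
chunks = []
for r in llm.stream_complete("Paul Graham is "):
    chunks.append(r.delta)
full_text = "".join(chunks)
print(full_text)
```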

### Configure Model

```py
llm = Cohere(model="command", api_key=api_key)
resp = llm.complete("Paul Graham is ")
# Note: Cohere may warn that trailing whitespace in the prompt has been trimmed to ensure high-quality generations.
print(resp)

# Output
# an English computer scientist, entrepreneur and investor.
# He is best known for his work as a co-founder of the seed accelerator Y Combinator.
# He is also the co-founder of the online dating platform Match.com.

# Async calls
llm = Cohere(model="command", api_key=api_key)
resp = await llm.acomplete("Paul Graham is ")
# Note: Cohere may warn that trailing whitespace in the prompt has been trimmed to ensure high-quality generations.
print(resp)

# Output
# an English computer scientist, entrepreneur and investor.
# He is best known for his work as a co-founder of the startup incubator and seed fund
# Y Combinator, and the programming language Lisp. He has also written numerous essays,
# many of which have become highly influential in the software engineering field.

# Streaming async
resp = await llm.astream_complete("Paul Graham is ")
async for delta in resp:
    print(delta.delta, end="")

# Output
# an English computer scientist, essayist, and businessman.
# He is best known for his work as a co-founder of the startup accelerator Y Combinator,
# and his essay "Beating the Averages."
```
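Generation behavior can also be tuned through constructor arguments. The sketch below assumes the `Cohere` class accepts `temperature` and `max_tokens` keyword arguments, as is conventional for LlamaIndex LLM integrations:

```py
# Assumed-but-conventional tuning knobs: a low temperature for more
# deterministic output, max_tokens to cap the completion length.
llm = Cohere(
    model="command",
    api_key=api_key,
    temperature=0.1,
    max_tokens=256,
)
resp = llm.complete("Paul Graham is ")
print(resp)
```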

### Set API Key at a per-instance level

```py
# If desired, you can have separate LLM instances use separate API keys.
from llama_index.llms.cohere import Cohere

llm_good = Cohere(api_key=api_key)
llm_bad = Cohere(model="command", api_key="BAD_KEY")

resp = llm_good.complete("Paul Graham is ")
print(resp)

resp = llm_bad.complete("Paul Graham is ")
print(resp)
```
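Note that the call with the bad key raises an exception rather than returning a response, so the second `print(resp)` above is never reached. If you want the script to continue, guard the call; a minimal sketch (the exact exception type raised by the underlying Cohere SDK may vary, so it catches broadly):

```py
try:
    resp = llm_bad.complete("Paul Graham is ")
    print(resp)
except Exception as exc:  # the Cohere SDK's specific error class may differ
    print(f"Request with the bad key failed: {exc}")
```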

### LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/cohere/

            
