langchain-nvidia-ai-endpoints

- Version: 0.3.7
- Summary: An integration package connecting NVIDIA AI Endpoints and LangChain
- Home page: https://github.com/langchain-ai/langchain-nvidia
- Requires Python: <4.0,>=3.9
- License: MIT
- Released: 2024-12-16
            # NVIDIA NIM Microservices

The `langchain-nvidia-ai-endpoints` package contains LangChain integrations for chat models and embeddings powered by [NVIDIA AI Foundation Models](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), and hosted on [NVIDIA API Catalog.](https://build.nvidia.com/)

NVIDIA AI Foundation models are community- and NVIDIA-built models, optimized by NVIDIA to deliver the best performance on NVIDIA accelerated infrastructure. Using the API, you can query live endpoints available on the NVIDIA API Catalog to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster using NVIDIA NIM™ microservices, which are part of NVIDIA AI Enterprise.

Models can be exported from NVIDIA’s API catalog with NVIDIA NIM, which is included with the NVIDIA AI Enterprise license, and run on-premises, giving enterprises ownership of their customizations and full control of their IP and AI applications. NIM microservices are packaged as container images on a per-model or per-model-family basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIM microservices are containers that provide interactive APIs for running inference on an AI model.
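Concretely, a chat NIM exposes an OpenAI-compatible HTTP API. The sketch below calls such an endpoint directly with `requests`; the host, port, and model name are placeholders for your own deployment.

```python
import requests

# Hypothetical local NIM deployment; adjust host, port, and model name to match yours.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

# The NIM responds with an OpenAI-style chat completion payload.
response = requests.post(url, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])
```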

Below are examples of how to use some common functionality surrounding text-generation and embedding models.

## Installation

```python
%pip install -U --quiet langchain-nvidia-ai-endpoints
```

## Setup

**To get started:**
1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models.
2. Click on your model of choice.
3. Under Input select the Python tab, and click `Get API Key`. Then click `Generate Key`.
4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.

```python
import getpass
import os

if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    nvidia_api_key = getpass.getpass("Enter your NVIDIA API key: ")
    assert nvidia_api_key.startswith("nvapi-"), f"{nvidia_api_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvidia_api_key
```
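If you prefer not to rely on the environment variable, the key can also be passed to the constructor directly. The keyword name used below (`nvidia_api_key`) is an assumption; check the signature of your installed version.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Pass the key explicitly instead of relying on the NVIDIA_API_KEY environment variable.
# The keyword name (nvidia_api_key) is assumed here; verify it against your installed version.
llm = ChatNVIDIA(model="meta/llama3-70b-instruct", nvidia_api_key=nvidia_api_key)
```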

## Working with NVIDIA API Catalog
```python
## Core LC Chat Interface
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama3-70b-instruct", max_tokens=419)
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
```
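As with any LangChain chat model, `invoke` also accepts a list of chat messages rather than a bare string:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# invoke() accepts a list of messages as well as a plain string
messages = [
    SystemMessage(content="You answer in exactly one sentence."),
    HumanMessage(content="What is LangChain?"),
]
print(llm.invoke(messages).content)
```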

## Working with NVIDIA NIM Microservices
When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.

[Learn more about NIM microservices](https://developer.nvidia.com/blog/nvidia-nim-offers-optimized-inference-microservices-for-deploying-ai-models-at-scale/)

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank

# connect to a chat NIM running at localhost:8000, specifying the model to use
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta-llama3-8b-instruct")

# connect to an embedding NIM running at localhost:8080
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")

# connect to a reranking NIM running at localhost:2016
ranker = NVIDIARerank(base_url="http://localhost:2016/v1")
```
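Once constructed, these objects behave just like their API Catalog counterparts, for example:

```python
# Chat against the locally hosted model
print(llm.invoke("Summarize what a NIM microservice is in one sentence.").content)

# Embed a query with the locally hosted embedding NIM
vector = embedder.embed_query("What is a NIM microservice?")
print(len(vector))
```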

## Stream, Batch, and Async

These models natively support streaming and, as with all LangChain LLMs, expose a `batch` method for handling concurrent requests, as well as async methods for `invoke`, `stream`, and `batch`. Below are a few examples.

```python
print(llm.batch(["What's 2*3?", "What's 2*6?"]))
# Or via the async API
# await llm.abatch(["What's 2*3?", "What's 2*6?"])
```

```python
for chunk in llm.stream("How far can a seagull fly in one day?"):
    # Show the token separations
    print(chunk.content, end="|")
```

```python
async for chunk in llm.astream("How long does it take for monarch butterflies to migrate?"):
    print(chunk.content, end="|")
```
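The async variants need a running event loop. In a notebook you can `await` them directly; in a plain script you can wrap them like this:

```python
import asyncio

async def main() -> None:
    # Async batch over several prompts
    print(await llm.abatch(["What's 2*3?", "What's 2*6?"]))
    # Async streaming of a single prompt
    async for chunk in llm.astream("How far can a seagull fly in one day?"):
        print(chunk.content, end="|")

# Drive the coroutines from synchronous code
asyncio.run(main())
```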

## Supported models

Querying `available_models` will give you all of the models offered by your API credentials.

```python
[model.id for model in llm.available_models if model.model_type]

#[
# ...
# 'databricks/dbrx-instruct',
# 'google/codegemma-7b',
# 'google/gemma-2b',
# 'google/gemma-7b',
# 'google/recurrentgemma-2b',
# 'meta/codellama-70b',
# 'meta/llama2-70b',
# 'meta/llama3-70b-instruct',
# 'meta/llama3-8b-instruct',
# 'microsoft/phi-3-mini-128k-instruct',
# 'mistralai/mistral-7b-instruct-v0.2',
# 'mistralai/mistral-large',
# 'mistralai/mixtral-8x22b-instruct-v0.1',
# 'mistralai/mixtral-8x7b-instruct-v0.1',
# 'snowflake/arctic',
# ...
#]
```
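You can also narrow the list by model type, for example to chat-capable models only. The exact `model_type` values (such as `"chat"`, assumed below) depend on the package version.

```python
# Filter to chat models; the "chat" type string is an assumption for this sketch.
chat_models = [m.id for m in llm.available_models if m.model_type == "chat"]
print(chat_models[:5])
```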

## Model types

All of the models above are supported and can be accessed via `ChatNVIDIA`.

Some model types support unique prompting techniques and chat messages. We will review a few important ones below.

**To find out more about a specific model, please navigate to the NVIDIA NIM section of ai.nvidia.com [as linked here](https://docs.api.nvidia.com/nim/).**

### General Chat

Models such as `meta/llama3-8b-instruct` and `mistralai/mixtral-8x22b-instruct-v0.1` are good all-around models that you can use with any LangChain chat messages. An example is below.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful AI assistant named Fred."),
        ("user", "{input}")
    ]
)
chain = (
    prompt
    | ChatNVIDIA(model="meta/llama3-8b-instruct")
    | StrOutputParser()
)

for txt in chain.stream({"input": "What's your name?"}):
    print(txt, end="")
```

### Code Generation

These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured-code tasks. Examples include `meta/codellama-70b` and `google/codegemma-7b`.

```python
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert coding AI. Respond only in valid python; no narration whatsoever."),
        ("user", "{input}")
    ]
)
chain = (
    prompt
    | ChatNVIDIA(model="meta/codellama-70b", max_tokens=419)
    | StrOutputParser()
)

for txt in chain.stream({"input": "How do I solve this fizz buzz problem?"}):
    print(txt, end="")
```

## Multimodal

NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over.

An example model supporting multimodal inputs is `nvidia/neva-22b`.

These models accept LangChain's standard image formats. Below are examples.

```python
import requests

image_url = "https://picsum.photos/seed/kitten/300/200"
image_content = requests.get(image_url).content
```

Initialize the model like so:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="nvidia/neva-22b")
```

#### Passing an image as a URL

```python
from langchain_core.messages import HumanMessage

llm.invoke(
    [
        HumanMessage(content=[
            {"type": "text", "text": "Describe this image:"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ])
    ])
```

#### Passing an image as a base64 encoded string

```python
import base64
b64_string = base64.b64encode(image_content).decode('utf-8')
llm.invoke(
    [
        HumanMessage(content=[
            {"type": "text", "text": "Describe this image:"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64_string}"}},
        ])
    ])
```

#### Directly within the string

The NVIDIA API uniquely accepts images as base64-encoded strings inlined within `<img>` HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly.

```python
base64_with_mime_type = f"data:image/png;base64,{b64_string}"
llm.invoke(
    f'What\'s in this image?\n<img src="{base64_with_mime_type}" />'
)
```

## Completions

You can also work with models that support the Completions API. These models accept a `prompt` instead of `messages`.

```python
from langchain_nvidia_ai_endpoints import NVIDIA

completions_llm = NVIDIA().bind(max_tokens=512)
[model.id for model in completions_llm.get_available_models()]

# [
#   ...
#   'bigcode/starcoder2-7b',
#   'bigcode/starcoder2-15b',
#   ...
# ]
```

```python
prompt = "# Function that does quicksort written in Rust without comments:"
for chunk in completions_llm.stream(prompt):
    print(chunk, end="", flush=True)
```
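Completions models also support the standard non-streaming calls; `invoke` returns the generated text as a plain string:

```python
# Non-streaming completion; the result is a plain string rather than a message object
code = completions_llm.invoke(prompt)
print(code)
```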


## Embeddings

You can also connect to embedding models through this package. Below is an example:

```python
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embedder = NVIDIAEmbeddings(model="NV-Embed-QA")
embedder.embed_query("What's the temperature today?")
embedder.embed_documents([
    "The temperature is 42 degrees.",
    "Class is dismissed at 9 PM."
])
```
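The returned vectors are plain Python lists of floats, so you can, for instance, score documents against a query with cosine similarity. This is a small illustrative sketch, not part of the package:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = embedder.embed_query("What's the temperature today?")
doc_vecs = embedder.embed_documents([
    "The temperature is 42 degrees.",
    "Class is dismissed at 9 PM.",
])

# Higher cosine similarity means the document is closer to the query
for label, vec in zip(["temperature doc", "class doc"], doc_vecs):
    print(label, cosine(query_vec, vec))
```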


## Ranking

You can connect to ranking models. Below is an example:

```python
from langchain_nvidia_ai_endpoints import NVIDIARerank
from langchain_core.documents import Document

query = "What is the GPU memory bandwidth of H100 SXM?"
passages = [
    "The Hopper GPU is paired with the Grace CPU using NVIDIA's ultra-fast chip-to-chip interconnect, delivering 900GB/s of bandwidth, 7X faster than PCIe Gen5. This innovative design will deliver up to 30X higher aggregate system memory bandwidth to the GPU compared to today's fastest servers and up to 10X higher performance for applications running terabytes of data.",
    "A100 provides up to 20X higher performance over the prior generation and can be partitioned into seven GPU instances to dynamically adjust to shifting demands. The A100 80GB debuts the world's fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.",
    "Accelerated servers with H100 deliver the compute power—along with 3 terabytes per second (TB/s) of memory bandwidth per GPU and scalability with NVLink and NVSwitch™.",
]

client = NVIDIARerank(model="nvidia/llama-3.2-nv-rerankqa-1b-v1")

response = client.compress_documents(
  query=query,
  documents=[Document(page_content=passage) for passage in passages]
)

print(f"Most relevant: {response[0].page_content}\nLeast relevant: {response[-1].page_content}")
```
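Each returned `Document` also carries the reranker's score in its metadata. The key is assumed here to be `relevance_score`; check your installed version.

```python
# Inspect the scores the reranker attached; the metadata key is an assumption.
for doc in response:
    print(doc.metadata.get("relevance_score"), "-", doc.page_content[:60])
```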


            
