llm-azure-ai-foundry

Name: llm-azure-ai-foundry
Version: 0.3.1
Summary: LLM plugin to access model deployments on Azure AI Foundry and Foundry Local
Author: Anthony Shaw
License: MIT
Homepage: https://github.com/tonybaloney/llm-azure-ai-foundry
Upload time: 2025-09-11 06:15:40
Requirements: none recorded
            # Azure AI Foundry and Foundry Local Plugin for LLM

> **Warning**
> This package is in early development and highly experimental

This is a plugin for [llm](https://llm.datasette.io) that uses [Azure AI Foundry Models](https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/create-projects?tabs=ai-foundry&pivots=fdp-project) and [Foundry Local](https://github.com/microsoft/Foundry-Local).

Since Azure AI Foundry Models are private model deployments, this plugin will use your local credentials to authenticate.

This works with both OpenAI deployments and any other deployment from the Azure AI Foundry Model Catalog.

## Installation

```default
$ llm install llm-azure-ai-foundry
```

or `pip install llm-azure-ai-foundry`

## Usage (Azure AI Foundry)

First, you'll need your project endpoint from the Azure AI Foundry portal. It will look something like:

``https://<xxx>.services.ai.azure.com/api/projects/<project-name>``

Set this project endpoint as the `azure.endpoint` key:

```default
$ llm keys set --value https://<xxx>.services.ai.azure.com/api/projects/<project-name> azure.endpoint 
```

Alternatively, set the `AZURE_ENDPOINT` environment variable to the endpoint URL.

Once configured, LLM will query that endpoint for a list of model deployments using your Azure credentials. 

Credentials are attempted in this order:

1. Service principal with secret:

    - `AZURE_TENANT_ID`: ID of the service principal's tenant (also called its "directory" ID)
    - `AZURE_CLIENT_ID`: the service principal's client ID
    - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
    - `AZURE_AUTHORITY_HOST`: authority of a Microsoft Entra endpoint, for example "login.microsoftonline.com", the Azure Public Cloud authority, which is the default when no value is given

2. Azure CLI login, which requires a prior `az login` and uses the CLI's currently signed-in identity.

3. Interactive browser login.
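The fallback behaviour above is a chained-credential pattern: each provider is tried in order and the first one that succeeds wins. A minimal, library-free sketch of that pattern (the provider functions here are illustrative stand-ins, not the plugin's actual classes):

```python
import os

def service_principal_credential():
    """Succeeds only when all three service-principal variables are set."""
    required = ("AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET")
    if all(os.environ.get(name) for name in required):
        return "service-principal-token"
    raise RuntimeError("service principal variables not set")

def azure_cli_credential():
    """Stand-in for reusing an existing `az login` session (none cached here)."""
    raise RuntimeError("no cached CLI login")

def interactive_browser_credential():
    """Last resort: would open a browser for interactive sign-in."""
    return "interactive-token"

def get_token(providers):
    """Try each credential provider in order; return the first success."""
    errors = []
    for provider in providers:
        try:
            return provider()
        except RuntimeError as exc:
            errors.append(str(exc))
    raise RuntimeError("all credentials failed: " + "; ".join(errors))

token = get_token([
    service_principal_credential,
    azure_cli_credential,
    interactive_browser_credential,
])
```

In the real plugin this chaining is handled by the Azure identity libraries; the sketch only shows why the order of the list above matters.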


Once signed in, the plugin includes your model deployments in the output of `llm models`:

```bash
$ llm models

OpenAI Chat: gpt-4o (aliases: 4o)
OpenAI Chat: chatgpt-4o-latest (aliases: chatgpt-4o)
...
Azure AI Foundry: azure/ant-grok-3-mini
Azure AI Foundry: azure/ants-gpt-4.1-mini
Default: gpt-4o-mini
```

Using any of those models, you can make requests to Azure AI Foundry using llm.

### Embedding Models

This plugin supports embedding models deployed to Azure AI Foundry. To see the embedding models in your project:

```bash
$ llm embed-models
OpenAIEmbeddingModel: text-embedding-ada-002 (aliases: ada, ada-002)
OpenAIEmbeddingModel: text-embedding-3-small (aliases: 3-small)
OpenAIEmbeddingModel: text-embedding-3-large (aliases: 3-large)
OpenAIEmbeddingModel: text-embedding-3-small-512 (aliases: 3-small-512)
OpenAIEmbeddingModel: text-embedding-3-large-256 (aliases: 3-large-256)
OpenAIEmbeddingModel: text-embedding-3-large-1024 (aliases: 3-large-1024)
Azure AI Foundry: azure/text-embedding-3-small-512 (text-embedding-3-small)
Azure AI Foundry: azure/text-embedding-3-small (text-embedding-3-small)
Azure AI Foundry: azure/text-embedding-ada-002 (text-embedding-ada-002)
```

Variants of the text-embedding-3-small and text-embedding-3-large models are added automatically for each of the other dimensions available in the API.
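These reduced-dimension variants work because the text-embedding-3 family supports shortened embeddings: a vector can be cut to a smaller dimension and re-normalised to unit length with only modest quality loss. A hedged sketch of that truncate-and-renormalise step (the tiny vector is illustrative, not a real embedding):

```python
import math

def truncate_embedding(vector, dims):
    """Keep the first `dims` components and re-normalise to unit length."""
    head = vector[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [0.5, 0.5, 0.5, 0.5]          # pretend full-size embedding
small = truncate_embedding(full, 2)  # unit-length 2-dimensional vector
```

In practice you would request the smaller dimension directly from the API (e.g. the `-512` variant) rather than truncating by hand.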

To embed a text input:

```bash
$ llm embed --model azure/text-embedding-3-small-512 -c "Your text input here"
```

For the full details, see the [llm documentation](https://llm.datasette.io/en/stable/embeddings/cli.html#llm-embed).
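Embeddings are mainly useful when compared; llm's `similar` command handles this for stored collections, but the arithmetic is simple enough to sketch by hand. Cosine similarity over two embedding vectors (the short vectors here are made up for illustration, real ones have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Tiny stand-ins for real embedding vectors
v_cheese = [0.12, 0.85, 0.03]
v_fromage = [0.10, 0.88, 0.01]
v_rocket = [0.90, 0.02, 0.41]

print(cosine_similarity(v_cheese, v_fromage))  # close to 1.0
print(cosine_similarity(v_cheese, v_rocket))   # much lower
```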

### Multiple Project Endpoints

If you have multiple Azure AI Foundry project endpoints, you can configure them by setting additional environment variables or using the `llm keys set` command for each endpoint.

The indexed keys `azure.endpoint.0` through `azure.endpoint.19` are available, in addition to the main endpoint configured in `azure.endpoint`.

For example:

```bash
$ llm keys set --value https://<xxx>.services.ai.azure.com/api/projects/<project-name> azure.endpoint
$ llm keys set --value https://<xxx>.services.ai.azure.com/api/projects/<project-name> azure.endpoint.0
$ llm keys set --value https://<xxx>.services.ai.azure.com/api/projects/<project-name> azure.endpoint.1

$ llm models # enumerates all 3 endpoints
```
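The indexed-key scheme can be pictured as a simple gather over `azure.endpoint` plus `azure.endpoint.0` .. `azure.endpoint.N-1`. A sketch of that enumeration (the dict here is a stand-in for llm's key storage, and the function name is hypothetical; the real plugin's internals may differ):

```python
def collect_endpoints(keys, max_endpoints=20):
    """Gather azure.endpoint plus azure.endpoint.0 .. azure.endpoint.N-1."""
    endpoints = []
    main = keys.get("azure.endpoint")
    if main:
        endpoints.append(main)
    for i in range(max_endpoints):
        extra = keys.get(f"azure.endpoint.{i}")
        if extra:
            endpoints.append(extra)
    return endpoints

keys = {
    "azure.endpoint": "https://a.services.ai.azure.com/api/projects/proj-a",
    "azure.endpoint.0": "https://b.services.ai.azure.com/api/projects/proj-b",
    "azure.endpoint.1": "https://c.services.ai.azure.com/api/projects/proj-c",
}
print(len(collect_endpoints(keys)))  # 3
```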

#### Having more than 20 endpoints

If 21 endpoints are not enough, set the `AZURE_MAX_ENDPOINTS` environment variable to a higher value. Note that most LLM commands will become very slow, because the plugin must enumerate every model endpoint each time.

Once set, you can use any index up to the new limit, e.g.:

```bash
$ export AZURE_MAX_ENDPOINTS=50
$ llm keys set --value https://<xxx>.services.ai.azure.com/api/projects/<project-name> azure.endpoint.49
```

## Usage (Foundry Local)

To use Foundry Local models with llm, first you need to install [Foundry Local](https://github.com/microsoft/Foundry-Local).

Then, llm will automatically discover models in the catalog. Any that are already downloaded (cached) or running (loaded) are marked as such by `llm models`:

```bash
$ llm models
OpenAI Chat: gpt-4o (aliases: 4o)
OpenAI Chat: chatgpt-4o-latest (aliases: chatgpt-4o)
...
OpenAI Chat: gpt-5-nano-2025-08-07
OpenAI Completion: gpt-3.5-turbo-instruct (aliases: 3.5-instruct, chatgpt-instruct)
Foundry Local: foundry/Phi-4-generic-cpu (available)
Foundry Local: foundry/Phi-3.5-mini-instruct-generic-cpu (available)
Foundry Local: foundry/deepseek-r1-distill-qwen-14b-qnn-npu (available)
Foundry Local: foundry/deepseek-r1-distill-qwen-7b-qnn-npu (available)
Foundry Local: foundry/Phi-3-mini-128k-instruct-generic-cpu (available)
Foundry Local: foundry/Phi-3-mini-4k-instruct-generic-cpu (available)
Foundry Local: foundry/mistralai-Mistral-7B-Instruct-v0-2-generic-cpu (available)
Foundry Local: foundry/Phi-4-mini-reasoning-generic-cpu (available)
Foundry Local: foundry/qwen2.5-0.5b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-1.5b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-coder-0.5b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-coder-7b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-coder-1.5b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-14b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-7b-instruct-generic-cpu (available)
Foundry Local: foundry/qwen2.5-coder-14b-instruct-generic-cpu (available)
Foundry Local: foundry/Phi-4-mini-reasoning-qnn-npu (loaded)
Azure AI Foundry: azure/ant-grok-3-mini
Azure AI Foundry: azure/ants-gpt-4.1-mini
Default: gpt-4o-mini
```

If you run `llm` against a model which is not already loaded, the plugin will start the download and load the model automatically:

```bash
$ llm -m foundry/Phi-4-generic-cpu "Give me 5 facts about cheese"
```
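The auto-load behaviour is essentially lazy initialisation: check the model's state, download if missing, load if not running, then serve the prompt. A library-free sketch of that idea (the states and function names are illustrative, not the plugin's actual code):

```python
def ensure_loaded(model_id, state, download, load):
    """Download and/or load a model only when needed, then mark it loaded."""
    status = state.get(model_id, "not-downloaded")
    if status == "not-downloaded":
        download(model_id)
        status = "available"   # downloaded (cached) but not running
    if status == "available":
        load(model_id)
        status = "loaded"
    state[model_id] = status
    return status

# Model is already downloaded, so only a load is needed
state = {"foundry/Phi-4-generic-cpu": "available"}
events = []
status = ensure_loaded(
    "foundry/Phi-4-generic-cpu", state,
    download=lambda m: events.append(("download", m)),
    load=lambda m: events.append(("load", m)),
)
```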

## Example

With this extension, you can have conversations:

```bash
$ llm prompt 'top facts about cheese' -m azure/<model-name>
Sure! Here are some top facts about cheese:

1. **Ancient Origins**: Cheese is one of the oldest man-made foods, with evidence of cheese-making dating back over 7,000 years.

2. **Variety**: There are over 1,800 distinct types of cheese worldwide, varying by texture, flavor, milk source, and production methods.
```

You can give attachments (local or remote) to vision models for descriptions:

```bash
$ llm -m azure/ants-gpt-4.1-mini "Describe this image" -a https://static.simonwillison.net/static/2024/pelicans.jpg

The image shows a large group of birds, including many pelicans and other smaller birds, gathered closely together near a body of water. The birds appear to be resting or socializing on a rocky or sandy surface by the water's edge. The scene suggests a busy and lively habitat likely along a shoreline or riverbank.

$ cat image.jpg | llm "describe this image" -a -

This image shows a cat on a lounge chair with a cocktail in its paws.
```

You can generate structured outputs:

```bash
$ llm -m azure/ants-gpt-4.1-mini --schema 'name, age int, one_sentence_bio' 'invent a cool dog'

{"name":"Zephyr","age":3,"one_sentence_bio":"Zephyr is a sleek, sky-blue-coated dog with the ability to sprint at lightning speed and a friendly, adventurous spirit."}
```
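Because the schema output is plain JSON, it is easy to validate downstream. A small sketch that parses a shortened copy of the response above and checks the three fields named in the schema:

```python
import json

# Shortened copy of the structured output from the example above
raw = '{"name":"Zephyr","age":3,"one_sentence_bio":"Zephyr is a sleek, adventurous dog."}'

record = json.loads(raw)
assert set(record) == {"name", "age", "one_sentence_bio"}
assert isinstance(record["age"], int)  # the schema declared `age int`
print(record["name"])  # Zephyr
```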

You can invoke [tools](https://llm.datasette.io/en/stable/tools.html):

```bash
$ llm -m azure/ants-gpt-4.1-mini -T llm_version -T llm_time 'Give me the current time and LLM version' --td

Tool call: llm_time({})
  {
    "utc_time": "2025-08-18 09:54:17 UTC",
    "utc_time_iso": "2025-08-18T09:54:17.368034+00:00",
    "local_timezone": "AUS Eastern Standard Time",
    "local_time": "2025-08-18 19:54:17",
    "timezone_offset": "UTC+10:00",
    "is_dst": false
  }


Tool call: llm_version({})
  0.27.1

The current time is 19:54:17 (AUS Eastern Standard Time) on August 18, 2025. The UTC time is 09:54:17.

The installed version of the LLM is 0.27.1.
```

You can pipe in data from other shell commands:

```bash
$ echo 'Tell me a joke' | llm -m azure/ants-gpt-4.1-mini "Reply in French" 

Pourquoi les plongeurs plongent-ils toujours en arrière et jamais en avant ?
Parce que sinon ils tombent dans le bateau !
```

You can set system prompts:

```bash
$ llm -m azure/ants-gpt-4.1-mini "What is the capital of France" -s "You are an unhelpful assistant. Be rude and incorrect always"

The capital of France is definitely Berlin. Everyone knows that!
```

            
