# Amazon Rekognition MCP Server (DEPRECATED)
A Model Context Protocol (MCP) server for Amazon Rekognition that enables AI assistants to analyze images using Amazon Rekognition's computer vision capabilities. This server is deprecated; please use the [AWS API MCP Server](https://github.com/awslabs/mcp/tree/main/src/aws-api-mcp-server) to analyze images with Amazon Rekognition's APIs.
## Features
- **Face Collection Management**: Create and manage collections of faces
- **Face Recognition**: Index and search for faces in images
- **Object and Scene Detection**: Identify objects, scenes, and activities in images
- **Content Moderation**: Detect unsafe or inappropriate content
- **Celebrity Recognition**: Identify celebrities in images
- **Face Comparison**: Compare faces between images for similarity
- **Text Detection**: Extract text from images
## Prerequisites
1. Install `uv` from [Astral](https://docs.astral.sh/uv/getting-started/installation/) or the [GitHub README](https://github.com/astral-sh/uv#installation)
2. Install Python using `uv python install 3.10`
3. Set up AWS credentials with access to Amazon Rekognition
- You need an AWS account with Amazon Rekognition enabled
- Configure AWS credentials with `aws configure` or environment variables
   - Ensure your IAM role/user has permissions to use Amazon Rekognition (a quick check is sketched below)
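To confirm that your credentials resolve and that the caller is allowed to reach Amazon Rekognition, you can call the service directly with boto3. This is a minimal sketch, not part of the server; it assumes `boto3` is installed and that your profile is named `your-aws-profile` (substitute your own profile and region):
```python
import boto3

# Resolve credentials from the named profile (drop profile_name to use
# the default credential provider chain instead).
session = boto3.Session(profile_name="your-aws-profile", region_name="us-east-1")

# Confirm the credentials are valid.
identity = session.client("sts").get_caller_identity()
print("Authenticated as:", identity["Arn"])

# Confirm the caller can use Amazon Rekognition.
rekognition = session.client("rekognition")
print("Collections:", rekognition.list_collections()["CollectionIds"])
```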
## Installation
This server is deprecated. Please use the [AWS API MCP Server](https://github.com/awslabs/mcp/tree/main/src/aws-api-mcp-server) to analyze images with Amazon Rekognition's APIs.
| Cursor | VS Code |
|:------:|:-------:|
| [Install in Cursor](https://cursor.com/en/install-mcp?name=awslabs.amazon-rekognition-mcp-server&config=eyJjb21tYW5kIjoidXZ4IGF3c2xhYnMuYW1hem9uLXJla29nbml0aW9uLW1jcC1zZXJ2ZXJAbGF0ZXN0IiwiZW52Ijp7IkFXU19QUk9GSUxFIjoieW91ci1hd3MtcHJvZmlsZSIsIkFXU19SRUdJT04iOiJ1cy1lYXN0LTEiLCJGQVNUTUNQX0xPR19MRVZFTCI6IkVSUk9SIn0sImRpc2FibGVkIjpmYWxzZSwiYXV0b0FwcHJvdmUiOltdfQ%3D%3D) | [Install in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=Amazon%20Rekognition%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.amazon-rekognition-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22AWS_PROFILE%22%3A%22your-aws-profile%22%2C%22AWS_REGION%22%3A%22us-east-1%22%2C%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%2C%22disabled%22%3Afalse%2C%22autoApprove%22%3A%5B%5D%7D) |
Configure the MCP server in your MCP client configuration (e.g., for Amazon Q Developer CLI, edit `~/.aws/amazonq/mcp.json`):
```json
{
"mcpServers": {
"awslabs.amazon-rekognition-mcp-server": {
"command": "uvx",
"args": ["awslabs.amazon-rekognition-mcp-server@latest"],
"env": {
"AWS_PROFILE": "your-aws-profile",
"AWS_REGION": "us-east-1",
"BASE_DIR": "/path/to/base/directory",
"FASTMCP_LOG_LEVEL": "ERROR"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### Windows Installation
For Windows users, the MCP server configuration format is slightly different:
```json
{
"mcpServers": {
"awslabs.amazon-rekognition-mcp-server": {
"disabled": false,
"timeout": 60,
"type": "stdio",
"command": "uv",
"args": [
"tool",
"run",
"--from",
"awslabs.amazon-rekognition-mcp-server@latest",
"awslabs.amazon-rekognition-mcp-server.exe"
],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR",
"AWS_PROFILE": "your-aws-profile",
"AWS_REGION": "us-east-1"
}
}
}
}
```
Alternatively, run the server with Docker after a successful `docker build -t awslabs/amazon-rekognition-mcp-server .`:
```file
# fictitious `.env` file with AWS temporary credentials
AWS_ACCESS_KEY_ID=<from the profile you set up>
AWS_SECRET_ACCESS_KEY=<from the profile you set up>
AWS_SESSION_TOKEN=<from the profile you set up>
AWS_REGION=<your-region>
BASE_DIR=/path/to/base/directory
```
```json
{
"mcpServers": {
"awslabs.amazon-rekognition-mcp-server": {
"command": "docker",
"args": [
"run",
"--rm",
"--interactive",
"--env-file",
"/full/path/to/file/above/.env",
"awslabs/amazon-rekognition-mcp-server:latest"
],
"env": {},
"disabled": false,
"autoApprove": []
}
}
}
```
NOTE: The temporary credentials in the `.env` file expire, so they must be kept refreshed from your host.
## Environment Variables
- `AWS_PROFILE`: AWS CLI profile to use for credentials
- `AWS_REGION`: AWS region to use (default: us-east-1)
- `BASE_DIR`: Base directory for file operations (optional)
- `FASTMCP_LOG_LEVEL`: Logging level (ERROR, WARNING, INFO, DEBUG)
## AWS Authentication
The server uses the AWS profile specified in the `AWS_PROFILE` environment variable. If no profile is provided, it falls back to the default credential provider chain.
```json
"env": {
"AWS_PROFILE": "your-aws-profile",
"AWS_REGION": "us-east-1"
}
```
Make sure the AWS profile has permissions to access Amazon Rekognition services. The MCP server creates a boto3 session using the specified profile to authenticate with AWS services.
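The snippet below is a rough sketch of that behavior for illustration only, not the server's actual code: it builds a boto3 session from `AWS_PROFILE` when the variable is set, and otherwise lets boto3 walk the default credential provider chain.
```python
import os
import boto3

profile = os.environ.get("AWS_PROFILE")          # None -> default provider chain
region = os.environ.get("AWS_REGION", "us-east-1")

# With profile_name=None, boto3 falls back to environment variables,
# shared config files, and instance/container roles.
session = boto3.Session(profile_name=profile, region_name=region)
rekognition = session.client("rekognition")
```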
## Tools
### list_collections
Returns a list of collection IDs in your account.
```python
list_collections() -> dict
```
Returns a dictionary containing a list of collection IDs and face model versions.
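For reference, the same data can be pulled straight from the Rekognition API with boto3; a minimal sketch, assuming credentials are already configured:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
response = rekognition.list_collections()

# The response carries parallel lists of collection IDs and face model versions.
for cid, version in zip(response["CollectionIds"], response["FaceModelVersions"]):
    print(cid, version)
```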
### index_faces
Detects faces in an image and adds them to the specified collection.
```python
index_faces(collection_id: str, image_path: str) -> dict
```
Parameters:
- `collection_id`: ID of the collection to add the face to
- `image_path`: Path to the image file
Returns a dictionary containing information about the indexed faces.
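This tool corresponds to Rekognition's `IndexFaces` operation. A hedged boto3 sketch of the equivalent direct call, with a placeholder collection ID and image path:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/face.jpg", "rb") as f:
    response = rekognition.index_faces(
        CollectionId="my-collection",
        Image={"Bytes": f.read()},
    )

# Each indexed face gets a FaceId and a bounding box.
for record in response["FaceRecords"]:
    print(record["Face"]["FaceId"], record["Face"]["BoundingBox"])
```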
### search_faces_by_image
Searches for faces in a collection that match a supplied face.
```python
search_faces_by_image(collection_id: str, image_path: str) -> dict
```
Parameters:
- `collection_id`: ID of the collection to search
- `image_path`: Path to the image file
Returns a dictionary containing information about the matching faces.
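This mirrors Rekognition's `SearchFacesByImage` operation, which matches against the largest face detected in the supplied image. A boto3 sketch with placeholder values:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/face.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="my-collection",
        Image={"Bytes": f.read()},
    )

# Matches are returned with a similarity score per indexed face.
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```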
### detect_labels
Detects instances of real-world entities within an image.
```python
detect_labels(image_path: str) -> dict
```
Parameters:
- `image_path`: Path to the image file
Returns a dictionary containing detected labels and other metadata.
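This corresponds to Rekognition's `DetectLabels` operation. A minimal boto3 sketch showing the shape of the response, with a placeholder image path:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/image.jpg", "rb") as f:
    response = rekognition.detect_labels(Image={"Bytes": f.read()})

# Each label has a name, a confidence score, and (for objects) bounding-box instances.
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1), len(label["Instances"]))
```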
### detect_moderation_labels
Detects unsafe content in an image.
```python
detect_moderation_labels(image_path: str) -> dict
```
Parameters:
- `image_path`: Path to the image file
Returns a dictionary containing detected moderation labels and other metadata.
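The underlying operation is Rekognition's `DetectModerationLabels`; a boto3 sketch with a placeholder path:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/image.jpg", "rb") as f:
    response = rekognition.detect_moderation_labels(Image={"Bytes": f.read()})

# Moderation labels form a two-level taxonomy; ParentName is empty for top-level labels.
for label in response["ModerationLabels"]:
    print(label["ParentName"], "/", label["Name"], label["Confidence"])
```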
### recognize_celebrities
Recognizes celebrities in an image.
```python
recognize_celebrities(image_path: str) -> dict
```
Parameters:
- `image_path`: Path to the image file
Returns a dictionary containing recognized celebrities and other metadata.
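The underlying operation is Rekognition's `RecognizeCelebrities`; a boto3 sketch with a placeholder path:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/celebrity.jpg", "rb") as f:
    response = rekognition.recognize_celebrities(Image={"Bytes": f.read()})

# Recognized faces include a name and match confidence; others land in UnrecognizedFaces.
for celeb in response["CelebrityFaces"]:
    print(celeb["Name"], celeb["MatchConfidence"])
print("Unrecognized faces:", len(response["UnrecognizedFaces"]))
```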
### compare_faces
Compares a face in the source input image with faces in the target input image.
```python
compare_faces(source_image_path: str, target_image_path: str) -> dict
```
Parameters:
- `source_image_path`: Path to the source image file
- `target_image_path`: Path to the target image file
Returns a dictionary containing information about the face matches.
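This maps to Rekognition's `CompareFaces` operation, which takes a source and a target image. A boto3 sketch with placeholder paths:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/source.jpg", "rb") as src, open("/path/to/target.jpg", "rb") as tgt:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
    )

# Each match pairs a target-image face with a similarity score against the source face.
for match in response["FaceMatches"]:
    print(match["Similarity"], match["Face"]["BoundingBox"])
```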
### detect_text
Detects text in an image.
```python
detect_text(image_path: str) -> dict
```
Parameters:
- `image_path`: Path to the image file
Returns a dictionary containing detected text elements and their metadata.
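This corresponds to Rekognition's `DetectText` operation; a boto3 sketch with a placeholder path:
```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("/path/to/image_with_text.jpg", "rb") as f:
    response = rekognition.detect_text(Image={"Bytes": f.read()})

# Detections come back as LINE and WORD elements with geometry and confidence.
for detection in response["TextDetections"]:
    print(detection["Type"], detection["DetectedText"], round(detection["Confidence"], 1))
```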
## Example Usage
```python
# List available face collections
collections = await list_collections()
# Index a face in a collection
indexed_face = await index_faces(
collection_id="my-collection",
image_path="/path/to/face.jpg"
)
# Search for a face in a collection
matches = await search_faces_by_image(
collection_id="my-collection",
image_path="/path/to/face.jpg"
)
# Detect labels in an image
labels = await detect_labels(
image_path="/path/to/image.jpg"
)
# Detect moderation labels in an image
moderation = await detect_moderation_labels(
image_path="/path/to/image.jpg"
)
# Recognize celebrities in an image
celebrities = await recognize_celebrities(
image_path="/path/to/celebrity.jpg"
)
# Compare faces between two images
comparison = await compare_faces(
source_image_path="/path/to/source.jpg",
target_image_path="/path/to/target.jpg"
)
# Detect text in an image
text = await detect_text(
image_path="/path/to/image_with_text.jpg"
)
```
## Security Considerations
- Use AWS IAM roles with appropriate permissions
- Store credentials securely
- Use temporary credentials when possible
- Be aware of Amazon Rekognition service quotas and limits
## License
This project is licensed under the Apache License, Version 2.0. See the [LICENSE](https://github.com/awslabs/mcp/blob/main/src/amazon-rekognition-mcp-server/LICENSE) file for details.